Azure-WALinuxAgent-a976115/000077500000000000000000000000001510742556200153725ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/.gitattributes000066400000000000000000000047261510742556200202760ustar00rootroot00000000000000
###############################################################################
# Set default behavior to automatically normalize line endings.
###############################################################################
* text=auto

###############################################################################
# Set default behavior for command prompt diff.
#
# This is needed for earlier builds of msysgit that do not have it on by
# default for csharp files.
# Note: This is only used by command line
###############################################################################
#*.cs diff=csharp

###############################################################################
# Set the merge driver for project and solution files
#
# Merging from the command prompt will add diff markers to the files if there
# are conflicts (Merging from VS is not affected by the settings below, in VS
# the diff markers are never inserted). Diff markers may cause the following
# file extensions to fail to load in VS. An alternative would be to treat
# these files as binary and thus they will always conflict and require user
# intervention with every merge.
# To do so, just uncomment the entries below
###############################################################################
#*.sln merge=binary
#*.csproj merge=binary
#*.vbproj merge=binary
#*.vcxproj merge=binary
#*.vcproj merge=binary
#*.dbproj merge=binary
#*.fsproj merge=binary
#*.lsproj merge=binary
#*.wixproj merge=binary
#*.modelproj merge=binary
#*.sqlproj merge=binary
#*.wwaproj merge=binary

###############################################################################
# behavior for image files
#
# image files are treated as binary by default.
###############################################################################
#*.jpg binary
#*.png binary
#*.gif binary

###############################################################################
# diff behavior for common document formats
#
# Convert binary document formats to text before diffing them. This feature
# is only available from the command line. Turn it on by uncommenting the
# entries below.
###############################################################################
#*.doc diff=astextplain
#*.DOC diff=astextplain
#*.docx diff=astextplain
#*.DOCX diff=astextplain
#*.dot diff=astextplain
#*.DOT diff=astextplain
#*.pdf diff=astextplain
#*.PDF diff=astextplain
#*.rtf diff=astextplain
#*.RTF diff=astextplain
Azure-WALinuxAgent-a976115/.github/000077500000000000000000000000001510742556200167325ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/.github/CONTRIBUTING.md000066400000000000000000000105271510742556200211700ustar00rootroot00000000000000
# Contributing to Linux Guest Agent

First, thank you for contributing to the WALinuxAgent repository!

## Basics

If you would like to become an active contributor to this project, please follow the instructions provided in [Microsoft Azure Projects Contribution Guidelines](http://azure.github.io/guidelines/).
## Table of Contents

[Before starting](#before-starting)
- [Github basics](#github-basics)
- [Code of Conduct](#code-of-conduct)

[Making Changes](#making-changes)
- [Pull Requests](#pull-requests)
- [Pull Request Guidelines](#pull-request-guidelines)
  - [Cleaning up commits](#cleaning-up-commits)
  - [General guidelines](#general-guidelines)
  - [Testing guidelines](#testing-guidelines)

## Before starting

### Github basics

#### GitHub workflow

If you don't have experience with Git and Github, some of the terminology and process can be confusing. [Here's a guide to understanding Github](https://guides.github.com/introduction/flow/).

#### Forking the Azure/WALinuxAgent repository

Unless you are working with multiple contributors on the same file, we ask that you fork the repository and submit your Pull Request from there. [Here's a guide to forks in Github](https://guides.github.com/activities/forking/).

### Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Making Changes

### Pull Requests

You can find all of the pull requests that have been opened in the [Pull Requests](https://github.com/Azure/WALinuxAgent/pulls) section of the repository. To open your own pull request, click [here](https://github.com/Azure/WALinuxAgent/compare).
When creating a pull request, keep the following in mind:

- Make sure you are pointing to the fork and branch that your changes were made in
- Choose the correct branch you want your pull request to be merged into
- The pull request template that is provided **should be filled out**; this is not something that should just be deleted or ignored when the pull request is created
  - Deleting or ignoring this template will lengthen the time it takes for your pull request to be reviewed

### Pull Request Guidelines

A pull request template will automatically be included as a part of your PR. Please fill out the checklist as specified. Pull requests **will not be reviewed** unless they include a properly completed checklist.

#### Cleaning up Commits

If you are thinking about making a large change, **break up the change into small, logical, testable chunks, and organize your pull requests accordingly**.

Often when a pull request is created with a large number of files changed and/or a large number of lines of code added and/or removed, GitHub will have a difficult time opening up the changes on their site. This forces the WALinuxAgent team to use separate software to do a code review on the pull request.

If you find yourself creating a pull request and are unable to see all the changes on GitHub, we recommend **splitting the pull request into multiple pull requests that can be reviewed on GitHub**.

If splitting up the pull request is not an option, we recommend **creating individual commits for different parts of the pull request, which can be reviewed individually on GitHub**.

For more information on cleaning up the commits in a pull request, such as how to rebase, squash, and cherry-pick, click [here](https://github.com/Azure/azure-powershell/blob/dev/documentation/cleaning-up-commits.md).
- Title of the pull request is clear and informative
- There are a small number of commits that each have an informative message
- A description of the changes the pull request makes is included, and a reference to the issue being resolved, if the change addresses one
- All files have the Microsoft copyright header

#### Testing Guidelines

The following guidelines must be followed in **EVERY** pull request that is opened.

- Pull request includes test coverage for the included changes
Azure-WALinuxAgent-a976115/.github/ISSUE_TEMPLATE/000077500000000000000000000000001510742556200211155ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/.github/ISSUE_TEMPLATE/bug_report.md000066400000000000000000000016351510742556200236140ustar00rootroot00000000000000
---
name: Bug report
about: Create a report to help us improve
title: "[BUG] Bug Title"
---

**Describe the bug: A clear and concise description of what the bug is.**

Note: Please add some context which would help us understand the problem better

1. Section of the log where the error occurs.
2. Serial console output
3. Steps to reproduce the behavior.

**Distro and WALinuxAgent details (please complete the following information):**

- Distro and Version: [e.g. Ubuntu 16.04]
- WALinuxAgent version [e.g. 2.2.40, you can copy the output of `waagent --version`, more info [here](https://github.com/Azure/WALinuxAgent/wiki/FAQ#what-does-goal-state-agent-mean-in-waagent---version-output)]

**Additional context**

Add any other context about the problem here.

**Log file attached**

If possible, please provide the full /var/log/waagent.log file to help us understand the problem better and get the context of the issue.
Azure-WALinuxAgent-a976115/.github/PULL_REQUEST_TEMPLATE.md000066400000000000000000000022141510742556200225320ustar00rootroot00000000000000
## Description

Issue #

---

### PR information

- [ ] Ensure development PR is based on the `develop` branch.
- [ ] If applicable, the PR references the bug/issue that it fixes in the description. - [ ] New Unit tests were added for the changes made ### Quality of Code and Contribution Guidelines - [ ] I have read the [contribution guidelines](https://github.com/Azure/WALinuxAgent/blob/master/.github/CONTRIBUTING.md). --- ### Distro maintenance information, if applicable - [ ] This is a contribution from a distro maintainer - [ ] The changes in this PR have been taken as a downstream patch (Note: it is not recommended to patch the agent without upstream review and approval) Azure-WALinuxAgent-a976115/.github/codecov.yml000066400000000000000000000000461510742556200210770ustar00rootroot00000000000000github_checks: annotations: false Azure-WALinuxAgent-a976115/.github/workflows/000077500000000000000000000000001510742556200207675ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/.github/workflows/ci_pr.yml000066400000000000000000000167021510742556200226140ustar00rootroot00000000000000name: CI Unit tests on: push: branches: [ "*" ] pull_request: branches: [ "*" ] workflow_dispatch: jobs: execute-tests: name: "Python ${{ matrix.python-version }} Unit Tests" runs-on: ubuntu-24.04 strategy: fail-fast: false matrix: include: # # Some of the Python versions we test are not supported by the setup-python Github Action. For those versions, we use a # pre-built virtual environment. 
# - python-version: "2.6" use_virtual_environment: true - python-version: "2.7" use_virtual_environment: true - python-version: "3.4" use_virtual_environment: true - python-version: "3.5" use_virtual_environment: true - python-version: "3.6" use_virtual_environment: true - python-version: "3.7" use_virtual_environment: true - python-version: "3.8" - python-version: "3.9" additional-nose-opts: "--with-coverage --cover-erase --cover-inclusive --cover-branches --cover-package=azurelinuxagent" - python-version: "3.10" - python-version: "3.11" - python-version: "3.12" steps: - name: Checkout WALinuxAgent uses: actions/checkout@v3 # # We either install Python and the test dependencies, or download a pre-built virtual environment, depending on the # use_virtual_environment flag. # - name: Setup Python ${{ matrix.python-version }} if: (!matrix.use_virtual_environment) uses: actions/setup-python@v4 with: python-version: ${{ matrix.python-version }} - name: Install dependencies if: (!matrix.use_virtual_environment) id: install-dependencies run: | sudo env "PATH=$PATH" python -m pip install --upgrade pip sudo env "PATH=$PATH" pip install -r requirements.txt sudo env "PATH=$PATH" pip install -r test-requirements.txt sudo env "PATH=$PATH" pip install --upgrade pylint - name: Setup Python ${{ matrix.python-version }} Virtual Environment if: matrix.use_virtual_environment id: install-venv run: | sudo apt-get update sudo apt-get install -y curl bzip2 sudo curl -sSf --retry 5 -o /tmp/python-${{ matrix.python-version }}.tar.bz2 https://dcrdata.blob.core.windows.net/python/python-${{ matrix.python-version }}.tar.bz2 sudo tar xjf /tmp/python-${{ matrix.python-version }}.tar.bz2 --directory / # # The virtual environments have dependencies on old versions of OpenSSL (e.g 1.0/1.1) which are not available on Ubuntu 24. We use this script to patch the environments. 
# if [[ "${{ matrix.use_virtual_environment}}" == "true" ]]; then sudo ./tests/python_eol/patch_python_venv.sh "${{ matrix.python-version }}" fi # # Execute the tests # - name: Execute Unit Tests run: | if [[ "${{ matrix.python-version }}" =~ ^3\.[1-9][0-9]+$ ]]; then # # Use pytest # ./ci/pytest.sh else # # Use nosetests # if [[ "${{ matrix.use_virtual_environment}}" == "true" ]]; then # the pytest version on the venvs does not support the --with-timer option export NOSEOPTS="--verbose ${{ matrix.additional-nose-opts }}" else export NOSEOPTS="--verbose --with-timer ${{ matrix.additional-nose-opts }}" fi # # If using a venv, activate it. # if [[ "${{ matrix.use_virtual_environment}}" == "true" ]]; then source /home/waagent/virtualenv/python${{ matrix.python-version }}/bin/activate fi ./ci/nosetests.sh fi # # Execute pylint even when the tests fail (but only if the dependencies were installed successfully) # # The virtual environments for 2.6, 2.7, and 3.4 do not include pylint, so we skip those Python versions. # - name: Run pylint if: (!contains(fromJSON('["2.6", "2.7", "3.4"]'), matrix.python-version) && (success() || (failure() && steps.install-dependencies.outcome == 'success'))) run: | # # If using a venv, activate it. # if [[ "${{ matrix.use_virtual_environment}}" == "true" ]]; then source /home/waagent/virtualenv/python${{ matrix.python-version }}/bin/activate fi # # List of files/directories to be checked by pylint. # The end-to-end tests run only on Python 3.9 and we lint them only on that version. # PYLINT_FILES="azurelinuxagent setup.py makepkg.py tests" if [[ "${{ matrix.python-version }}" == "3.9" ]]; then PYLINT_FILES="$PYLINT_FILES tests_e2e" fi # # Command-line options for pylint. # * "unused-private-member" is not implemented on 3.5 and will produce "E0012: Bad option value 'unused-private-member' (bad-option-value)" # so we suppress "bad-option-value". 
# * 3.9 will produce "no-member" for several properties/methods that are added to the mocks used by the unit tests (e.g. # "E1101: Instance of 'WireProtocol' has no 'aggregate_status' member") so we suppress that warning. # * On 3.9 pylint crashes when parsing azurelinuxagent/daemon/main.py (see https://github.com/pylint-dev/pylint/issues/9473), so we ignore it. # * 'no-self-use' ("R0201: Method could be a function") was moved to an optional extension on 3.8 and is no longer used by default. It needs # to be suppressed for previous versions (3.0-3.7), though. # * 'contextmanager-generator-missing-cleanup' produces false positives when yield is used inside an if-else block in contextmanager generator functions # (https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/contextmanager-generator-missing-cleanup.html). # This check is not implemented on versions 3.0-3.7, which report "Bad option value 'contextmanager-generator-missing-cleanup' (bad-option-value)", so we # suppress "bad-option-value" there as well. # * >= 3.9 will produce "too-many-positional-arguments" for several methods that have more than 5 args, so we suppress that warning.
# (R0917: Too many positional arguments (8/5) (too-many-positional-arguments)) PYLINT_OPTIONS="--rcfile=ci/pylintrc --jobs=0" if [[ "${{ matrix.python-version }}" == "3.9" ]]; then PYLINT_OPTIONS="$PYLINT_OPTIONS --disable=no-member,too-many-positional-arguments --ignore=main.py" fi if [[ "${{ matrix.python-version }}" =~ ^3\.(10|11|12)$ ]]; then PYLINT_OPTIONS="$PYLINT_OPTIONS --disable=too-many-positional-arguments" fi if [[ "${{ matrix.python-version }}" =~ ^3\.[0-7]$ ]]; then PYLINT_OPTIONS="$PYLINT_OPTIONS --disable=no-self-use,bad-option-value" fi echo "PYLINT_OPTIONS: $PYLINT_OPTIONS" echo "PYLINT_FILES: $PYLINT_FILES" pylint $PYLINT_OPTIONS $PYLINT_FILES # # Lastly, compile code coverage # - name: Compile Code Coverage if: matrix.python-version == '3.9' run: | echo looking for coverage files : ls -alh | grep -i coverage sudo env "PATH=$PATH" coverage combine coverage.*.data sudo env "PATH=$PATH" coverage xml sudo env "PATH=$PATH" coverage report - name: Upload Code Coverage if: matrix.python-version == '3.9' uses: codecov/codecov-action@v3 with: file: ./coverage.xml Azure-WALinuxAgent-a976115/.gitignore000066400000000000000000000020461510742556200173640ustar00rootroot00000000000000# Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # Virtualenv py3env/ # C extensions *.so # Distribution / packaging .Python env/ build/ develop-eggs/ dist/ downloads/ eggs/ parts/ sdist/ var/ *.egg-info/ .installed.cfg *.egg # PyCharm .idea/ .idea_modules/ # PyInstaller # Usually these files are written by a python script from a template # before PyInstaller builds the exe, so as to inject date/other infos into it. 
*.manifest *.spec # Installer logs pip-log.txt pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ .tox/ .coverage .cache nosetests.xml coverage.xml # Translations *.mo *.pot # Django stuff: *.log # Sphinx documentation docs/_build/ # PyBuilder target/ waagentc *.pyproj *.sln *.suo waagentc bin/waagent2.0c # rope project .ropeproject/ # mac osx specific files .DS_Store ### VirtualEnv template # Virtualenv # http://iamzed.com/2009/05/07/a-primer-on-virtualenv/ .Python pyvenv.cfg .venv pip-selfcheck.json # virtualenv venv/ ENV/ # dotenv .env # pyenv .python-version .vscode/Azure-WALinuxAgent-a976115/CODEOWNERS000066400000000000000000000012661510742556200167720ustar00rootroot000000000000001 # See https://help.github.com/articles/about-codeowners/ # for more info about CODEOWNERS file # It uses the same pattern rule for gitignore file # https://git-scm.com/docs/gitignore#_pattern_format # Provisioning Agent # The Azure Linux Provisioning team is interested in getting notifications # when there are requests for changes in the provisioning agent. For any # questions, please feel free to reach out to thstring@microsoft.com. /azurelinuxagent/pa/ @trstringer @anhvoms /tests/pa/ @trstringer @anhvoms # # RDMA # /azurelinuxagent/common/rdma.py @longlimsft /azurelinuxagent/pa/rdma/ @longlimsft # # Linux Agent team # * @narrieta @ZhidongPeng @nagworld9 @maddieford @gabstamsft Azure-WALinuxAgent-a976115/LICENSE.txt000066400000000000000000000261301510742556200172170ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2016 Microsoft Corporation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
Azure-WALinuxAgent-a976115/MAINTENANCE.md000066400000000000000000000013131510742556200174340ustar00rootroot00000000000000## Microsoft Azure Linux Agent Maintenance Guide ### Version rules * Production releases are public * Test releases are for internal use * Production versions use only [major].[minor].[revision] * Test versions use [major].[minor].[revision].[build] * Test a.b.c.0 is equivalent to Prod a.b.c * Publishing to Production requires incrementing the revision and dropping the build number * We do not use pre-release labels on any builds ### Version updates * The version of the agent can be found at https://github.com/Azure/WALinuxAgent/blob/master/azurelinuxagent/common/version.py#L53 assigned to AGENT_VERSION * Update the version here and send for PR before declaring a release via GitHub Azure-WALinuxAgent-a976115/MANIFEST000066400000000000000000000005701510742556200165250ustar00rootroot00000000000000# file GENERATED by distutils, do NOT edit README setup.py bin/waagent config/waagent.conf config/waagent.logrotate test/test_logger.py walinuxagent/__init__.py walinuxagent/agent.py walinuxagent/conf.py walinuxagent/envmonitor.py walinuxagent/extension.py walinuxagent/install.py walinuxagent/logger.py walinuxagent/protocol.py walinuxagent/provision.py walinuxagent/util.py Azure-WALinuxAgent-a976115/MANIFEST.in000066400000000000000000000001141510742556200171240ustar00rootroot00000000000000recursive-include bin * recursive-include init * recursive-include config * Azure-WALinuxAgent-a976115/NOTICE000066400000000000000000000002411510742556200162730ustar00rootroot00000000000000Microsoft Azure Linux Agent Copyright 2012 Microsoft Corporation This product includes software developed at Microsoft Corporation (http://www.microsoft.com/). 
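The version rules in MAINTENANCE.md above can be sketched as a small check. This is an illustrative sketch only; `classify_version` is a hypothetical name, not part of the agent:

```python
import re

def classify_version(version):
    """Classify an agent version string per the MAINTENANCE.md rules:
    production versions use [major].[minor].[revision]; test versions add a
    [build] component; and a.b.c.0 is equivalent to production a.b.c.
    (Hypothetical helper for illustration -- not part of the agent.)"""
    if not re.fullmatch(r"\d+(\.\d+){2,3}", version):
        raise ValueError("unexpected version format: %r" % (version,))
    parts = version.split(".")
    if len(parts) == 3:
        return "production"
    # Four components: a test build; build number 0 is equivalent to production.
    return "production" if parts[3] == "0" else "test"
```

For example, `2.2.40` and `2.2.40.0` are treated as equivalent production versions, while `2.2.40.2` is a test build.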
Azure-WALinuxAgent-a976115/README.md000066400000000000000000000616171510742556200166640ustar00rootroot00000000000000 # Microsoft Azure Linux Agent ## Linux distributions support The list of distros we officially support is maintained at: [Linux distributions supported by Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros). Our daily automation tests most of these distributions. The Agent can be used on other distributions as well, but development, testing and support for those are done by the open source community. This repo contains community-driven support for some distributions which are not officially supported by Azure. Testing is done using the develop branch, which can be unstable. For a stable build please use the master branch instead. [![CodeCov](https://codecov.io/gh/Azure/WALinuxAgent/branch/develop/graph/badge.svg)](https://codecov.io/gh/Azure/WALinuxAgent/branch/develop) ## Introduction The Microsoft Azure Linux Agent (waagent) manages Linux provisioning and VM interaction with the Azure Fabric Controller. 
It provides the following functionality for Linux IaaS deployments:

* Image Provisioning
  * Creation of a user account
  * Configuring SSH authentication types
  * Deployment of SSH public keys and key pairs
  * Setting the host name
  * Publishing the host name to the platform DNS
  * Reporting SSH host key fingerprint to the platform
* Resource Disk Management
  * Formatting and mounting the resource disk
  * Configuring swap space
* Networking
  * Manages routes to improve compatibility with platform DHCP servers
  * Ensures the stability of the network interface name
* Kernel
  * Configure virtual NUMA (disable for kernel <2.6.37)
  * Configure SCSI timeouts for the root device (which could be remote)
* Diagnostics
  * Console redirection to the serial port
* SCVMM Deployments
  * Detect and bootstrap the VMM agent for Linux when running in a System Center Virtual Machine Manager 2012R2 environment
* VM Extension
  * Inject component authored by Microsoft and Partners into Linux VM (IaaS) to enable software and configuration automation
  * VM Extension reference implementation on [GitHub](https://github.com/Azure/azure-linux-extensions)

## Communication

The information flow from the platform to the agent occurs via two channels:

* A boot-time attached DVD for IaaS deployments. This DVD includes an OVF-compliant configuration file that includes all provisioning information other than the actual SSH keypairs.
* A TCP endpoint exposing a REST API used to obtain deployment and topology configuration.

### HTTP Proxy

The Agent will use an HTTP proxy if provided via the `http_proxy` (for `http` requests) or `https_proxy` (for `https` requests) environment variables. Due to limitations of Python, the agent *does not* support HTTP proxies requiring authentication. Similarly, the Agent will bypass the proxy if the environment variable `no_proxy` is set.

Note that the way to define those environment variables for the Agent service varies across different distros.
For distros that use systemd, a common approach is to use Environment or EnvironmentFile in the [Service] section of the service definition, for example using an override or a drop-in file (see "systemctl edit" for overrides).

Example:

```bash
# cat /etc/systemd/system/walinuxagent.service.d/http-proxy.conf
[Service]
Environment="http_proxy=http://proxy.example.com:80/"
Environment="https_proxy=http://proxy.example.com:80/"
#
```

The Agent passes its environment to the VM Extensions it executes, including `http_proxy` and `https_proxy`, so defining a proxy for the Agent will also define it for the VM Extensions.

The [`HttpProxy.Host` and `HttpProxy.Port`](#httpproxyhost-httpproxyport) configuration variables, if used, override the environment settings. Note that these configuration variables are local to the Agent process and are not passed to VM Extensions.

## Requirements

The following systems have been tested and are known to work with the Azure Linux Agent. Please note that this list may differ from the official list of supported systems on the Microsoft Azure Platform as described [here](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros).

Waagent depends on some system packages in order to function properly:

* Python 2.6+
* OpenSSL 1.0+
* OpenSSH 5.3+
* Filesystem utilities: sfdisk, fdisk, mkfs, parted
* Password tools: chpasswd, sudo
* Text processing tools: sed, grep
* Network tools: ip-route, iptables

## Installation

Installing via your distribution's package repository is the only method that is supported. You can install from source for more advanced options, such as installing to a custom location or creating custom images. Installing from source, though, may override customizations done to the Agent by your distribution, and is meant only for advanced users. We provide very limited support for this method.
To install from source, you can use **setuptools**:

```bash
sudo python setup.py install --register-service
```

For Python 3, use:

```bash
sudo python3 setup.py install --register-service
```

You can view more installation options by running:

```bash
sudo python setup.py install --help
```

The agent's log file is kept at `/var/log/waagent.log`.

Lastly, you can also customize your own RPM or DEB packages using the configuration samples provided in the deb and rpm sections below. This method is also meant for advanced users and we provide very limited support for it.

## Upgrade

Upgrading via your distribution's package repository or using automatic updates are the only supported methods. More information can be found here: [Update Linux Agent](https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/update-linux-agent)

To upgrade the Agent from source, you can use **setuptools**. Upgrading from source is meant for advanced users and we provide very limited support for it.

```bash
sudo python setup.py install --force
```

Restart the waagent service. For most Linux distributions:

```bash
sudo service waagent restart
```

For Ubuntu, use:

```bash
sudo service walinuxagent restart
```

For CoreOS, use:

```bash
sudo systemctl restart waagent
```

## Command line options

### Flags

`-verbose`: Increase verbosity of specified command

`-force`: Skip interactive confirmation for some commands

### Commands

`-help`: Lists the supported commands and flags.
`-deprovision`: Attempt to clean the system and make it suitable for re-provisioning, by deleting the following:

* All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
* Nameserver configuration in /etc/resolv.conf
* Root password from /etc/shadow (if Provisioning.DeleteRootPassword is 'y' in the configuration file)
* Cached DHCP client leases
* Resets host name to localhost.localdomain

**WARNING!** Deprovision does not guarantee that the image is cleared of all sensitive information and suitable for redistribution.

`-deprovision+user`: Performs everything under deprovision (above) and also deletes the last provisioned user account and associated data.

`-version`: Displays the version of waagent

`-serialconsole`: Configures GRUB to mark ttyS0 (the first serial port) as the boot console. This ensures that kernel bootup logs are sent to the serial port and made available for debugging.

`-daemon`: Run waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent init script.

`-start`: Run waagent as a background process

`-collect-logs [-full]`: Runs the log collector utility that collects relevant agent logs for debugging and stores them in the agent folder on disk. The exact location will be shown when run. Use the flag `-full` for more exhaustive log collection.

## Configuration

A configuration file (/etc/waagent.conf) controls the actions of waagent. Blank lines and lines whose first character is a `#` are ignored (end-of-line comments are *not* supported).
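The comment-handling rules just described can be sketched in a few lines of Python. This is an illustrative reimplementation only, not the agent's actual parser, and the function name is made up:

```python
def parse_waagent_conf(text):
    """Minimal sketch of the waagent.conf format: KEY=VALUE pairs,
    with blank lines and lines starting with '#' ignored.
    End-of-line comments are NOT supported, so a '#' appearing after
    a value would simply become part of the value."""
    options = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank line or full-line comment
        if "=" not in stripped:
            continue  # skip malformed lines in this sketch
        key, _, value = stripped.partition("=")
        options[key.strip()] = value.strip()
    return options
```

For example, `parse_waagent_conf("Extensions.Enabled=y")` yields `{"Extensions.Enabled": "y"}`.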
A sample configuration file is shown below:

```yml
Extensions.Enabled=y
Extensions.GoalStatePeriod=6
Provisioning.Agent=auto
Provisioning.DeleteRootPassword=n
Provisioning.RegenerateSshHostKeyPair=y
Provisioning.SshHostKeyPairType=rsa
Provisioning.MonitorHostName=y
Provisioning.DecodeCustomData=n
Provisioning.ExecuteCustomData=n
Provisioning.PasswordCryptId=6
Provisioning.PasswordCryptSaltLength=10
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.MountOptions=None
ResourceDisk.EnableSwap=n
ResourceDisk.EnableSwapEncryption=n
ResourceDisk.SwapSizeMB=0
Logs.Verbose=n
Logs.Collect=y
Logs.CollectPeriod=3600
OS.AllowHTTP=n
OS.RootDeviceScsiTimeout=300
OS.EnableFIPS=n
OS.OpensslPath=None
OS.SshClientAliveInterval=180
OS.SshDir=/etc/ssh
HttpProxy.Host=None
HttpProxy.Port=None
```

The various configuration options are described in detail below. Configuration options are of three types: Boolean, String or Integer. The Boolean configuration options can be specified as "y" or "n". The special keyword "None" may be used for some string type configuration entries as detailed below.

### Configuration File Options

#### __Extensions.Enabled__

_Type: Boolean_
_Default: y_

This allows the user to enable or disable the extension handling functionality in the agent. Valid values are "y" or "n". If extension handling is disabled, the goal state will still be processed and VM status is still reported, but only every 5 minutes. Extension config within the goal state will be ignored. Note that functionality such as password reset, ssh key updates and backups depend on extensions. Only disable this if you do not need extensions at all.

_Note_: disabling extensions in this manner is not the same as running completely without the agent. In order to do that, the `provisionVMAgent` flag must be set at provisioning time, via whichever API is being used. We will provide more details on this on our wiki when it is generally available.
#### __Extensions.WaitForCloudInit__

_Type: Boolean_
_Default: n_

Waits for cloud-init to complete (cloud-init status --wait) before executing VM extensions.

Both cloud-init and VM extensions are common ways to customize a VM during initial deployment. By default, the agent will start executing extensions while cloud-init may still be in the 'config' stage and won't wait for the 'final' stage to complete.

Cloud-init and extensions may execute operations that conflict with each other (for example, both of them may try to install packages). Setting this option to 'y' ensures that VM extensions are executed only after cloud-init has completed all its stages.

Note that using this option requires creating a custom image with the value of this option set to 'y', in order to ensure that the wait is performed during the initial deployment of the VM.

#### __Extensions.WaitForCloudInitTimeout__

_Type: Integer_
_Default: 3600_

Timeout in seconds for the Agent to wait on cloud-init. If the timeout elapses, the Agent will continue executing VM extensions. See Extensions.WaitForCloudInit for more details.

#### __Extensions.GoalStatePeriod__

_Type: Integer_
_Default: 6_

How often to poll for new goal states (in seconds) and report the status of the VM and extensions. Goal states describe the desired state of the extensions on the VM.

_Note_: setting this parameter to more than a few minutes can make the state of the VM be reported as unresponsive/unavailable on the Azure portal. Also, this setting affects how fast the agent starts executing extensions.

#### __AutoUpdate.UpdateToLatestVersion__

_Type: Boolean_
_Default: y_

Enables auto-update of the Extension Handler. The Extension Handler is responsible for managing extensions and reporting VM status. The core functionality of the agent is contained in the Extension Handler, and we encourage users to enable this option in order to maintain an up to date version.
When this option is enabled, the Agent will install new versions when they become available. When disabled, the Agent will not install any new versions, but it will use the most recent version already installed on the VM.

_Notes_:

1. This option was added on version 2.10.0.8 of the Agent. For previous versions, see AutoUpdate.Enabled.
2. If both options are specified in waagent.conf, AutoUpdate.UpdateToLatestVersion overrides the value set for AutoUpdate.Enabled.
3. Changing this config option requires a service restart to pick up the updated setting.

For more information on the agent version, see our [FAQ](https://github.com/Azure/WALinuxAgent/wiki/FAQ#what-does-goal-state-agent-mean-in-waagent---version-output).
For more information on the agent update, see our [FAQ](https://github.com/Azure/WALinuxAgent/wiki/FAQ#how-auto-update-works-for-extension-handler).
For more information on the AutoUpdate.UpdateToLatestVersion vs AutoUpdate.Enabled, see our [FAQ](https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion).
#### __AutoUpdate.Enabled__

_Type: Boolean_
_Default: y_

Enables auto-update of the Extension Handler. This flag is supported for legacy reasons and we strongly recommend using AutoUpdate.UpdateToLatestVersion instead.

The difference between these 2 flags is that, when set to 'n', AutoUpdate.Enabled will use the version of the Extension Handler that is pre-installed on the image, while AutoUpdate.UpdateToLatestVersion will use the most recent version that has already been installed on the VM (via auto-update).

On most distros the default value is 'y'.

#### __Provisioning.Agent__

_Type: String_
_Default: auto_

Choose which provisioning agent to use (or allow waagent to figure it out by specifying "auto"). Possible options are "auto" (default), "waagent", "cloud-init", or "disabled".

#### __Provisioning.Enabled__ (*removed in 2.2.45*)

_Type: Boolean_
_Default: y_

This allows the user to enable or disable the provisioning functionality in the agent. Valid values are "y" or "n". If provisioning is disabled, SSH host and user keys in the image are preserved and any configuration specified in the Azure provisioning API is ignored.

_Note_: This configuration option has been removed and has no effect. waagent now auto-detects cloud-init as a provisioning agent (with an option to override with `Provisioning.Agent`).

#### __Provisioning.MonitorHostName__

_Type: Boolean_
_Default: n_

Monitor host name changes and publish changes via DHCP requests.

#### __Provisioning.MonitorHostNamePeriod__

_Type: Integer_
_Default: 30_

How often to monitor host name changes (in seconds). This setting is ignored if MonitorHostName is not set.

#### __Provisioning.UseCloudInit__

_Type: Boolean_
_Default: n_

This option enables/disables support for provisioning by means of cloud-init. When true ("y"), the agent will wait for cloud-init to complete before installing extensions and processing the latest goal state. _Provisioning.Enabled_ must be disabled ("n") for this option to have an effect.
Setting _Provisioning.Enabled_ to true ("y") overrides this option and runs the built-in agent provisioning code.

_Note_: This configuration option has been removed and has no effect. waagent now auto-detects cloud-init as a provisioning agent (with an option to override with `Provisioning.Agent`).

#### __Provisioning.DeleteRootPassword__

_Type: Boolean_
_Default: n_

If set, the root password in the /etc/shadow file is erased during the provisioning process.

#### __Provisioning.RegenerateSshHostKeyPair__

_Type: Boolean_
_Default: y_

If set, all SSH host key pairs (ecdsa, dsa and rsa) are deleted from /etc/ssh/ during the provisioning process, and a single fresh key pair is generated. The encryption type for the fresh key pair is configurable by the Provisioning.SshHostKeyPairType entry. Please note that some distributions will re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted (for example, upon a reboot).

#### __Provisioning.SshHostKeyPairType__

_Type: String_
_Default: rsa_

This can be set to an encryption algorithm type that is supported by the SSH daemon on the VM. The typically supported values are "rsa", "dsa" and "ecdsa". Note that "putty.exe" on Windows does not support "ecdsa". So, if you intend to use putty.exe on Windows to connect to a Linux deployment, please use "rsa" or "dsa".

#### __Provisioning.MonitorHostName__

_Type: Boolean_
_Default: y_

If set, waagent will monitor the Linux VM for hostname changes (as returned by the "hostname" command) and automatically update the networking configuration in the image to reflect the change. In order to push the name change to the DNS servers, networking will be restarted in the VM. This will result in a brief loss of Internet connectivity.

#### __Provisioning.DecodeCustomData__

_Type: Boolean_
_Default: n_

If set, waagent will decode CustomData from Base64.
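The effect of Provisioning.DecodeCustomData can be illustrated with a short sketch. The helper name below is made up for illustration and is not the agent's actual code:

```python
import base64

def prepare_custom_data(raw_custom_data, decode_custom_data=False):
    # The platform delivers CustomData Base64-encoded; when the
    # Provisioning.DecodeCustomData option is 'y', the agent decodes
    # it before use. This mirrors that switch.
    if decode_custom_data:
        return base64.b64decode(raw_custom_data)
    return raw_custom_data
```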
#### __Provisioning.ExecuteCustomData__

_Type: Boolean_
_Default: n_

If set, waagent will execute CustomData after provisioning.

#### __Provisioning.PasswordCryptId__

_Type: String_
_Default: 6_

Algorithm used by crypt when generating password hash.

* 1 - MD5
* 2a - Blowfish
* 5 - SHA-256
* 6 - SHA-512

#### __Provisioning.PasswordCryptSaltLength__

_Type: String_
_Default: 10_

Length of random salt used when generating password hash.

#### __ResourceDisk.Format__

_Type: Boolean_
_Default: y_

If set, the resource disk provided by the platform will be formatted and mounted by waagent if the filesystem type requested by the user in "ResourceDisk.Filesystem" is anything other than "ntfs". A single partition of type Linux (83) will be made available on the disk. Note that this partition will not be formatted if it can be successfully mounted.

#### __ResourceDisk.Filesystem__

_Type: String_
_Default: ext4_

This specifies the filesystem type for the resource disk. Supported values vary by Linux distribution. If the string is X, then mkfs.X should be present on the Linux image. SLES 11 images should typically use 'ext3'. BSD images should use 'ufs2' here.

#### __ResourceDisk.MountPoint__

_Type: String_
_Default: /mnt/resource_

This specifies the path at which the resource disk is mounted.

#### __ResourceDisk.MountOptions__

_Type: String_
_Default: None_

Specifies disk mount options to be passed to the mount -o command. This is a comma separated list of values, ex. 'nodev,nosuid'. See mount(8) for details.

#### __ResourceDisk.EnableSwap__

_Type: Boolean_
_Default: n_

If set, a swap file (/swapfile) is created on the resource disk and added to the system swap space.

#### __ResourceDisk.EnableSwapEncryption__

_Type: Boolean_
_Default: n_

If set, the swap file (/swapfile) is mounted as an encrypted filesystem (flag supported only on FreeBSD).

#### __ResourceDisk.SwapSizeMB__

_Type: Integer_
_Default: 0_

The size of the swap file in megabytes.
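How Provisioning.PasswordCryptId and Provisioning.PasswordCryptSaltLength (described above) combine is easiest to see in the salt string handed to crypt(3). The sketch below only builds that `$<id>$<salt>` prefix; the function name is illustrative and this is not the agent's actual implementation:

```python
import random
import string

def build_crypt_salt(crypt_id="6", salt_length=10):
    # crypt(3) selects the hashing algorithm from the id between the
    # dollar signs ($6$ = SHA-512, $5$ = SHA-256, $1$ = MD5); the random
    # characters after it are the salt, whose length is configurable.
    alphabet = string.ascii_letters + string.digits
    salt = "".join(random.choice(alphabet) for _ in range(salt_length))
    return "${0}${1}".format(crypt_id, salt)
```

Passing the resulting string as the second argument to `crypt.crypt()` would then produce the hash format written to /etc/shadow.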
#### __Logs.Verbose__

_Type: Boolean_
_Default: n_

If set, log verbosity is boosted. Waagent logs to /var/log/waagent.log and leverages the system logrotate functionality to rotate logs.

#### __Logs.Collect__

_Type: Boolean_
_Default: y_

If set, agent logs will be periodically collected and uploaded to a secure location for improved supportability.

NOTE: This feature relies on the agent's resource usage features (cgroups); this flag will not take effect on distros where cgroups are not supported.

#### __Logs.CollectPeriod__

_Type: Integer_
_Default: 3600_

This configures how frequently to collect and upload logs. The default is every hour.

NOTE: This only takes effect if the Logs.Collect option is enabled.

#### __OS.AllowHTTP__

_Type: Boolean_
_Default: n_

If SSL support is not compiled into Python, the agent will fail all HTTPS requests. You can set this option to 'y' to make the agent fall back to HTTP instead of failing the requests.

NOTE: Allowing HTTP may unintentionally expose secure data.

#### __OS.EnableRDMA__

_Type: Boolean_
_Default: n_

If set, the agent will attempt to install and then load an RDMA kernel driver that matches the version of the firmware on the underlying hardware.

#### __OS.EnableFIPS__

_Type: Boolean_
_Default: n_

If set, the agent will emit "OPENSSL_FIPS=1" into the environment when executing OpenSSL commands. This signals OpenSSL to use any installed FIPS-compliant libraries. Note that the agent itself has no FIPS-specific code. _If no FIPS-compliant certificates are installed, then enabling this option will cause all OpenSSL commands to fail._

#### __OS.EnableFirewall__

_Type: Boolean_
_Default: n (set to 'y' in waagent.conf)_

Creates firewall rules to allow communication with the VM Host only by the Agent.

#### __OS.MonitorDhcpClientRestartPeriod__

_Type: Integer_
_Default: 30_

The agent monitors restarts of the DHCP client and restores network rules when a restart happens. This setting determines how often (in seconds) to monitor for restarts.
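The OS.EnableFIPS behavior described above amounts to injecting one variable into the environment of child OpenSSL processes. A minimal sketch, under the assumption that OpenSSL is invoked via subprocess (the function name and `openssl_path` parameter are illustrative, not the agent's actual API):

```python
import os
import subprocess

def run_openssl(args, enable_fips=False, openssl_path="openssl"):
    # When FIPS is enabled, the child environment gains OPENSSL_FIPS=1,
    # which signals OpenSSL to use installed FIPS-compliant libraries;
    # openssl_path allows an alternate binary location.
    env = os.environ.copy()
    if enable_fips:
        env["OPENSSL_FIPS"] = "1"
    return subprocess.call([openssl_path] + list(args), env=env)
```

If no FIPS-compliant libraries are installed, such invocations would fail, matching the warning above.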
#### __OS.RootDeviceScsiTimeout__

_Type: Integer_
_Default: 300_

This configures the SCSI timeout in seconds on the root device. If not set, the system defaults are used.

#### __OS.RootDeviceScsiTimeoutPeriod__

_Type: Integer_
_Default: 30_

How often to set the SCSI timeout on the root device (in seconds). This setting is ignored if RootDeviceScsiTimeout is not set.

#### __OS.OpensslPath__

_Type: String_
_Default: None_

This can be used to specify an alternate path for the openssl binary to use for cryptographic operations.

#### __OS.RemovePersistentNetRulesPeriod__

_Type: Integer_
_Default: 30_

How often (in seconds) to remove the udev rules for persistent network interface names (75-persistent-net-generator.rules and /etc/udev/rules.d/70-persistent-net.rules).

#### __OS.SshClientAliveInterval__

_Type: Integer_
_Default: 180_

This value sets the number of seconds the agent uses for the SSH ClientAliveInterval configuration option.

#### __OS.SshDir__

_Type: String_
_Default: `/etc/ssh`_

This option can be used to override the normal location of the SSH configuration directory.

#### __HttpProxy.Host, HttpProxy.Port__

_Type: String_
_Default: None_

If set, the agent will use this proxy server for HTTP/HTTPS requests. These values *will* override the `http_proxy` or `https_proxy` environment variables. Lastly, `HttpProxy.Host` is required (if a proxy is to be used) and `HttpProxy.Port` is optional.

#### __CGroups.EnforceLimits__

_Type: Boolean_
_Default: y_

If set, the agent will attempt to set cgroups limits for cpu and memory for the agent process itself as well as extension processes. See the wiki for further details on this.

#### __CGroups.Excluded__

_Type: String_
_Default: customscript,runcommand_

The list of extensions which will be excluded from cgroups limits. This should be comma separated.
#### __Protocol.EndpointDiscovery__

_Type: String_
_Default: dhcp_

Determines how the agent will discover the [WireServer endpoint](https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16). The agent will use DHCP by default to discover the WireServer endpoint, but if this setting is 'static' the agent will use the known WireServer address (168.63.129.16). Possible options are "dhcp" (default) or "static".

### Telemetry

WALinuxAgent collects usage data and sends it to Microsoft to help improve our products and services. The data collected is used to track service health and assist with Azure support requests. Data collected does not include any personally identifiable information. Read our [privacy statement](http://go.microsoft.com/fwlink/?LinkId=521839) to learn more.

WALinuxAgent does not support disabling telemetry at this time. WALinuxAgent must be removed to disable telemetry collection. If you need this feature, please open an issue in GitHub and explain your requirement.

### Appendix

We do not maintain packaging information in this repo but some samples are shown below as a reference. See the downstream distribution repositories for officially maintained packaging.

#### deb packages

The official Ubuntu WALinuxAgent package can be found [here](https://launchpad.net/ubuntu/+source/walinuxagent).

Run once:

1. Install required packages

   ```bash
   sudo apt-get -y install ubuntu-dev-tools pbuilder python-all debhelper
   ```

2. Create the pbuilder environment

   ```bash
   sudo pbuilder create --debootstrapopts --variant=buildd
   ```

3. Obtain `waagent.dsc` from a downstream package repo

To compile the package, from the top-most directory:

1. Build the source package

   ```bash
   dpkg-buildpackage -S
   ```

2. Build the package

   ```bash
   sudo pbuilder build waagent.dsc
   ```

3. Fetch the built package, usually from `/var/cache/pbuilder/result`

#### rpm packages

The instructions below describe how to build an rpm package.

1.
Install setuptools

   ```bash
   curl https://bootstrap.pypa.io/ez_setup.py -o - | python
   ```

2. The following command will build the binary and source RPMs:

   ```bash
   python setup.py bdist_rpm
   ```

-----

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

Azure-WALinuxAgent-a976115/SECURITY.md

## Security

Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).

If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.

## Reporting Security Issues

**Please do not report security vulnerabilities through public GitHub issues.**

Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).

If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).

You should receive a response within 24 hours.
If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).

Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:

* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue

This information will help us triage your report more quickly. If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.

## Preferred Languages

We prefer all communications to be in English.

## Policy

Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).

Azure-WALinuxAgent-a976115/__main__.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import azurelinuxagent.agent as agent

agent.main()

Azure-WALinuxAgent-a976115/azurelinuxagent/
Azure-WALinuxAgent-a976115/azurelinuxagent/__init__.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

Azure-WALinuxAgent-a976115/azurelinuxagent/agent.py

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # """ Module agent """ from __future__ import print_function import json import os import re import subprocess import sys import threading import time from azurelinuxagent.common.exception import CGroupsException from azurelinuxagent.ga import logcollector, cgroupconfigurator from azurelinuxagent.ga.cgroupcontroller import AGENT_LOG_COLLECTOR from azurelinuxagent.ga.cpucontroller import _CpuController from azurelinuxagent.ga.cgroupapi import create_cgroup_api, InvalidCgroupMountpointException from azurelinuxagent.ga.firewall_manager import FirewallManager import azurelinuxagent.common.conf as conf import azurelinuxagent.common.event as event import azurelinuxagent.common.logger as logger from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.logcollector import LogCollector, OUTPUT_RESULTS_FILE_PATH from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil, textutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import AGENT_NAME, AGENT_LONG_VERSION, AGENT_VERSION, \ DISTRO_NAME, DISTRO_VERSION, \ PY_VERSION_MAJOR, PY_VERSION_MINOR, \ PY_VERSION_MICRO, GOAL_STATE_AGENT_VERSION, \ get_daemon_version, set_daemon_version from azurelinuxagent.ga.collect_logs import CollectLogsHandler, get_log_collector_monitor_handler from azurelinuxagent.pa.provision.default import ProvisionHandler class AgentCommands(object): """ This is the list of all commands that the Linux Guest Agent supports """ DeprovisionUser = "deprovision+user" Deprovision = "deprovision" Daemon = "daemon" Start = "start" RegisterService = "register-service" RunExthandlers = "run-exthandlers" Version = "version" ShowConfig = "show-configuration" Help = "help" CollectLogs = "collect-logs" SetupFirewall = "setup-firewall" Provision = "provision" class Agent(object): def __init__(self, 
verbose, conf_file_path=None): """ Initialize agent running environment. """ self.conf_file_path = conf_file_path self.osutil = get_osutil() # Init stdout log level = logger.LogLevel.VERBOSE if verbose else logger.LogLevel.INFO logger.add_logger_appender(logger.AppenderType.STDOUT, level) # Init config conf_file_path = self.conf_file_path \ if self.conf_file_path is not None \ else self.osutil.get_agent_conf_file_path() conf.load_conf_from_file(conf_file_path) # Init log verbose = verbose or conf.get_logs_verbose() level = logger.LogLevel.VERBOSE if verbose else logger.LogLevel.INFO logger.add_logger_appender(logger.AppenderType.FILE, level, path=conf.get_agent_log_file()) # echo the log to /dev/console if the machine will be provisioned if conf.get_logs_console() and not ProvisionHandler.is_provisioned(): self.__add_console_appender(level) if event.send_logs_to_telemetry(): logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=event.add_log_event) ext_log_dir = conf.get_ext_log_dir() try: if os.path.isfile(ext_log_dir): raise Exception("{0} is a file".format(ext_log_dir)) if not os.path.isdir(ext_log_dir): fileutil.mkdir(ext_log_dir, mode=0o755, owner=self.osutil.get_root_username()) except Exception as e: logger.error( "Exception occurred while creating extension " "log directory {0}: {1}".format(ext_log_dir, e)) # Init event reporter # Note that the reporter is not fully initialized here yet. Some telemetry fields are filled with data # originating from the goal state or IMDS, which requires a WireProtocol instance. Once a protocol # has been established, those fields must be explicitly initialized using # initialize_event_logger_vminfo_common_parameters(). Any events created before that initialization # will contain dummy values on those fields. 
event.init_event_status(conf.get_lib_dir()) event_dir = os.path.join(conf.get_lib_dir(), event.EVENTS_DIRECTORY) event.init_event_logger(event_dir) event.enable_unhandled_err_dump("WALA") def __add_console_appender(self, level): logger.add_logger_appender(logger.AppenderType.CONSOLE, level, path="/dev/console") def daemon(self): """ Run agent daemon """ set_daemon_version(AGENT_VERSION) logger.set_prefix("Daemon") threading.current_thread().name = "Daemon" child_args = None \ if self.conf_file_path is None \ else "-configuration-path:{0}".format(self.conf_file_path) from azurelinuxagent.daemon import get_daemon_handler daemon_handler = get_daemon_handler() daemon_handler.run(child_args=child_args) def provision(self): """ Run provision command """ from azurelinuxagent.pa.provision import get_provision_handler provision_handler = get_provision_handler() provision_handler.run() def deprovision(self, force=False, deluser=False): """ Run deprovision command """ from azurelinuxagent.pa.deprovision import get_deprovision_handler deprovision_handler = get_deprovision_handler() deprovision_handler.run(force=force, deluser=deluser) def register_service(self): """ Register agent as a service """ print("Register {0} service".format(AGENT_NAME)) self.osutil.register_agent_service() print("Stop {0} service".format(AGENT_NAME)) self.osutil.stop_agent_service() print("Start {0} service".format(AGENT_NAME)) self.osutil.start_agent_service() def run_exthandlers(self, debug=False): """ Run the update and extension handler """ logger.set_prefix("ExtHandler") threading.current_thread().name = "ExtHandler" # # Agents < 2.2.53 used to echo the log to the console. Since the extension handler could have been started by # one of those daemons, output a message indicating that output to the console will stop, otherwise users # may think that the agent died if they noticed that output to the console stops abruptly. 
# # Feel free to remove this code if telemetry shows there are no more agents <= 2.2.53 in the field. # if conf.get_logs_console() and get_daemon_version() < FlexibleVersion("2.2.53"): self.__add_console_appender(logger.LogLevel.INFO) try: logger.info(u"The agent will now check for updates and then will process extensions. Output to /dev/console will be suspended during those operations.") finally: logger.disable_console_output() from azurelinuxagent.ga.update import get_update_handler update_handler = get_update_handler() update_handler.run(debug) def show_configuration(self): configuration = conf.get_configuration() for k in sorted(configuration.keys()): print("{0} = {1}".format(k, configuration[k])) def collect_logs(self, is_full_mode): logger.set_prefix("LogCollector") if is_full_mode: logger.info("Running log collector mode full") else: logger.info("Running log collector mode normal") LogCollector.initialize_telemetry() # Check the cgroups unit log_collector_monitor = None tracked_controllers = [] if CollectLogsHandler.is_enabled_monitor_cgroups_check(): try: cgroup_api = create_cgroup_api() logger.info("Using cgroup {0} for resource enforcement and monitoring".format(cgroup_api.get_cgroup_version())) except InvalidCgroupMountpointException as e: event.warn(WALAEventOperation.LogCollection, "The agent does not support cgroups if the default systemd mountpoint is not being used: {0}", ustr(e)) sys.exit(logcollector.INVALID_CGROUPS_ERRCODE) except CGroupsException as e: event.warn(WALAEventOperation.LogCollection, "Unable to determine which cgroup version to use: {0}", ustr(e)) sys.exit(logcollector.INVALID_CGROUPS_ERRCODE) def _validate_log_collector_cgroup_slice(): """ Validates that the log collector process is running in the expected cgroup slice. It is expected that after invoking the log collector, there may be a delay in populating cgroup information in systemd. Hence, multiple retries have been added. 
If it still fails, the function logs a warning event and exits the process with the appropriate error code. If multiple log collector runs fail with the same error, we disable the log collector until the service is restarted. """ retry_count = 0 while True: try: log_collector_cgroup = cgroup_api.get_process_cgroup(process_id="self", cgroup_name=AGENT_LOG_COLLECTOR) if not log_collector_cgroup.check_in_expected_slice(cgroupconfigurator.LOGCOLLECTOR_SLICE): raise CGroupsException("The Log Collector process is not in the proper cgroup. Expected slice: {0}".format(cgroupconfigurator.LOGCOLLECTOR_SLICE)) return log_collector_cgroup except CGroupsException as e: retry_count += 1 if retry_count >= logcollector.LOG_COLLECTOR_CGROUP_PATH_VALIDATION_MAX_RETRIES: event.warn(WALAEventOperation.LogCollection, ustr(e)) sys.exit(logcollector.UNEXPECTED_CGROUP_PATH_ERRCODE) logger.info("Check cgroup in expected slice failed: retrying in {0} secs [Attempt {1}/{2}]".format(logcollector.LOG_COLLECTOR_CGROUP_PATH_VALIDATION_RETRY_DELAY, retry_count, logcollector.LOG_COLLECTOR_CGROUP_PATH_VALIDATION_MAX_RETRIES)) time.sleep(logcollector.LOG_COLLECTOR_CGROUP_PATH_VALIDATION_RETRY_DELAY) log_collector_cgroup = _validate_log_collector_cgroup_slice() tracked_controllers = log_collector_cgroup.get_controllers() for controller in tracked_controllers: logger.info("{0} controller for cgroup: {1}".format(controller.get_controller_type(), controller)) if len(tracked_controllers) != len(log_collector_cgroup.get_supported_controller_names()): event.warn(WALAEventOperation.LogCollection, "At least one required controller is missing. The following controllers are required for the log collector to run: {0}", log_collector_cgroup.get_supported_controller_names()) sys.exit(logcollector.INVALID_CGROUPS_ERRCODE) try: log_collector = LogCollector(is_full_mode) # Running log collector resource monitoring only if agent starts the log collector. 
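The retry loop in `_validate_log_collector_cgroup_slice` above follows a common validate-with-retries pattern: keep retrying while systemd populates cgroup data, then give up and propagate the error. A generic stand-alone sketch (the helper names here are illustrative, not the agent's API):

```python
import time

def retry_validation(validate, max_retries, delay):
    # Generic sketch of the loop in _validate_log_collector_cgroup_slice:
    # retry validate() until it succeeds or max_retries is exhausted.
    attempt = 0
    while True:
        try:
            return validate()
        except Exception:
            attempt += 1
            if attempt >= max_retries:
                raise
            time.sleep(delay)

state = {"calls": 0}

def flaky_check():
    # Simulates cgroup info that only becomes available on the third query.
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("cgroup not yet populated")
    return "expected-slice"

print(retry_validation(flaky_check, max_retries=5, delay=0))  # succeeds on attempt 3
```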
# If Log collector start by any other means, then it will not be monitored. if CollectLogsHandler.is_enabled_monitor_cgroups_check(): for controller in tracked_controllers: if isinstance(controller, _CpuController): controller.initialize_cpu_usage() controller.track_throttle_time(True) break log_collector_monitor = get_log_collector_monitor_handler(tracked_controllers) log_collector_monitor.run() archive, total_uncompressed_size = log_collector.collect_logs_and_get_archive() logger.info("Log collection successfully completed. Archive can be found at {0} " "and detailed log output can be found at {1}".format(archive, OUTPUT_RESULTS_FILE_PATH)) if log_collector_monitor is not None: log_collector_monitor.stop() try: metrics_summary = log_collector_monitor.get_max_recorded_metrics() metrics_summary['Total Uncompressed File Size (B)'] = total_uncompressed_size msg = json.dumps(metrics_summary) logger.info(msg) event.add_event(op=event.WALAEventOperation.LogCollection, message=msg, log_event=False) except Exception as e: msg = "An error occurred while reporting log collector resource usage summary: {0}".format(ustr(e)) logger.warn(msg) event.add_event(op=event.WALAEventOperation.LogCollection, is_success=False, message=msg, log_event=False) except Exception as e: logger.error("Log collection completed unsuccessfully. Error: {0}".format(ustr(e))) logger.info("Detailed log output can be found at {0}".format(OUTPUT_RESULTS_FILE_PATH)) sys.exit(1) finally: if log_collector_monitor is not None: log_collector_monitor.stop() @staticmethod def setup_firewall(endpoint): logger.set_prefix("Firewall") threading.current_thread().name = "Firewall" event.info(event.WALAEventOperation.Firewall, "Setting up firewall after boot. 
Endpoint: {0}", ustr(endpoint)) try: firewall_manager = FirewallManager.create(endpoint) firewall_manager.setup() event.info(event.WALAEventOperation.Firewall, "Successfully set the firewall rules") except Exception as error: event.error(event.WALAEventOperation.Firewall, "Unable to add firewall rules. Error: {0}", ustr(error)) sys.exit(1) def main(args=None): """ Parse command line arguments, exit with usage() on error. Invoke different methods according to different command """ if args is None: args = [] if len(args) <= 0: args = sys.argv[1:] command, force, verbose, debug, conf_file_path, log_collector_full_mode, firewall_endpoint = parse_args(args) if command == AgentCommands.Version: version() elif command == AgentCommands.Help: print(usage()) elif command == AgentCommands.Start: start(conf_file_path=conf_file_path) else: try: agent = Agent(verbose, conf_file_path=conf_file_path) if command == AgentCommands.DeprovisionUser: agent.deprovision(force, deluser=True) elif command == AgentCommands.Deprovision: agent.deprovision(force, deluser=False) elif command == AgentCommands.Provision: agent.provision() elif command == AgentCommands.RegisterService: agent.register_service() elif command == AgentCommands.Daemon: agent.daemon() elif command == AgentCommands.RunExthandlers: agent.run_exthandlers(debug) elif command == AgentCommands.ShowConfig: agent.show_configuration() elif command == AgentCommands.CollectLogs: agent.collect_logs(log_collector_full_mode) elif command == AgentCommands.SetupFirewall: agent.setup_firewall(firewall_endpoint) except Exception as e: logger.error(u"Failed to run '{0}': {1}", command, textutil.format_exception(e)) def parse_args(sys_args): """ Parse command line arguments """ cmd = AgentCommands.Help force = False verbose = False debug = False conf_file_path = None log_collector_full_mode = False endpoint = None regex_cmd_format = "^([-/]*){0}" for arg in sys_args: if arg == "": # Don't parse an empty parameter continue m = 
re.match(r"^(?:[-/]*)configuration-path:([\w/\.\-_]+)", arg)
        if m is not None:
            conf_file_path = m.group(1)
            if not os.path.exists(conf_file_path):
                print("Error: Configuration file {0} does not exist".format(
                    conf_file_path), file=sys.stderr)
                print(usage())
                sys.exit(1)
        elif re.match("^([-/]*)deprovision\\+user", arg):
            cmd = AgentCommands.DeprovisionUser
        elif re.match(regex_cmd_format.format(AgentCommands.Deprovision), arg):
            cmd = AgentCommands.Deprovision
        elif re.match(regex_cmd_format.format(AgentCommands.Daemon), arg):
            cmd = AgentCommands.Daemon
        elif re.match(regex_cmd_format.format(AgentCommands.Start), arg):
            cmd = AgentCommands.Start
        elif re.match(regex_cmd_format.format(AgentCommands.RegisterService), arg):
            cmd = AgentCommands.RegisterService
        elif re.match(regex_cmd_format.format(AgentCommands.RunExthandlers), arg):
            cmd = AgentCommands.RunExthandlers
        elif re.match(regex_cmd_format.format(AgentCommands.Version), arg):
            cmd = AgentCommands.Version
        elif re.match(regex_cmd_format.format("verbose"), arg):
            verbose = True
        elif re.match(regex_cmd_format.format("debug"), arg):
            debug = True
        elif re.match(regex_cmd_format.format("force"), arg):
            force = True
        elif re.match(regex_cmd_format.format(AgentCommands.ShowConfig), arg):
            cmd = AgentCommands.ShowConfig
        elif re.match("^([-/]*)(help|usage|\\?)", arg):
            cmd = AgentCommands.Help
        elif re.match(regex_cmd_format.format(AgentCommands.CollectLogs), arg):
            cmd = AgentCommands.CollectLogs
        elif re.match(regex_cmd_format.format("full"), arg):
            log_collector_full_mode = True
        else:
            regex_cmd = regex_cmd_format.format("{0}=(?P<endpoint>[\\d.]{{7,}})".format(AgentCommands.SetupFirewall))
            match = re.match(regex_cmd, arg)
            if match is not None:
                cmd = AgentCommands.SetupFirewall
                endpoint = match.group('endpoint')
            else:
                cmd = AgentCommands.Help
                break
    return cmd, force, verbose, debug, conf_file_path, log_collector_full_mode, endpoint


def version():
    """
    Show agent version
    """
    print(("{0} running on {1} {2}".format(AGENT_LONG_VERSION,
                                           DISTRO_NAME,

DISTRO_VERSION)))
    print("Python: {0}.{1}.{2}".format(PY_VERSION_MAJOR, PY_VERSION_MINOR,
                                       PY_VERSION_MICRO))
    print("Goal state agent: {0}".format(GOAL_STATE_AGENT_VERSION))


def usage():
    """
    Return agent usage message
    """
    s = "\n"
    s += ("usage: {0} [-verbose] [-force] [-help] "
          "-configuration-path:<path to configuration file> "
          "-deprovision[+user]|-register-service|-version|-daemon|-start|"
          "-run-exthandlers|-show-configuration|-collect-logs [-full]|-setup-firewall=<IP>]"
          "").format(sys.argv[0])
    s += "\n"
    return s


def start(conf_file_path=None):
    """
    Start agent daemon in a background process and set stdout/stderr to /dev/null
    """
    args = [sys.argv[0], '-daemon']
    if conf_file_path is not None:
        args.append('-configuration-path:{0}'.format(conf_file_path))
    with open(os.devnull, 'w') as devnull:
        subprocess.Popen(args, stdout=devnull, stderr=devnull)


if __name__ == '__main__':
    main()
Azure-WALinuxAgent-a976115/azurelinuxagent/common/AgentGlobals.py000066400000000000000000000023221510742556200250220ustar00rootroot00000000000000
# Microsoft Azure Linux Agent
#
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+


class AgentGlobals(object):
    """
    This class is used for setting AgentGlobals which can be used all throughout the Agent.
    """

    GUID_ZERO = "00000000-0000-0000-0000-000000000000"

    #
    # Some modules (e.g.
telemetry) require an up-to-date container ID. We update this variable each time we # fetch the goal state. # _container_id = GUID_ZERO @staticmethod def get_container_id(): return AgentGlobals._container_id @staticmethod def update_container_id(container_id): AgentGlobals._container_id = container_id Azure-WALinuxAgent-a976115/azurelinuxagent/common/__init__.py000066400000000000000000000011661510742556200242240ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/azurelinuxagent/common/agent_supported_feature.py000066400000000000000000000130071510742556200274000ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
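The `AgentGlobals` container-ID accessors above form a tiny process-wide registry: a class-level value refreshed on every goal-state fetch and readable from anywhere in the agent. The class is small enough to exercise stand-alone (the GUID used in the update call is an invented example value):

```python
# The AgentGlobals pattern from above, exercised stand-alone.
class AgentGlobals(object):
    GUID_ZERO = "00000000-0000-0000-0000-000000000000"
    _container_id = GUID_ZERO

    @staticmethod
    def get_container_id():
        return AgentGlobals._container_id

    @staticmethod
    def update_container_id(container_id):
        AgentGlobals._container_id = container_id

print(AgentGlobals.get_container_id() == AgentGlobals.GUID_ZERO)  # True before any goal state
AgentGlobals.update_container_id("8e5e19b4-fd44-4f6f-b31b-4f22a23e8c16")  # example GUID
print(AgentGlobals.get_container_id())
```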
#
# Requires Python 2.6+ and Openssl 1.0+
#

from azurelinuxagent.common import conf


class SupportedFeatureNames(object):
    """
    Enum for defining the feature names for all features that the agent supports
    """
    MultiConfig = "MultipleExtensionsPerHandler"
    ExtensionTelemetryPipeline = "ExtensionTelemetryPipeline"
    FastTrack = "FastTrack"
    GAVersioningGovernance = "VersioningGovernance"  # Guest Agent Versioning


class AgentSupportedFeature(object):
    """
    Interface for defining all features that the Linux Guest Agent supports, and for
    reporting whether each of them is supported back to CRP
    """

    def __init__(self, name, version="1.0", supported=False):
        self.__name = name
        self.__version = version
        self.__supported = supported

    @property
    def name(self):
        return self.__name

    @property
    def version(self):
        return self.__version

    @property
    def is_supported(self):
        return self.__supported


class _MultiConfigFeature(AgentSupportedFeature):

    __NAME = SupportedFeatureNames.MultiConfig
    __VERSION = "1.0"
    __SUPPORTED = True

    def __init__(self):
        super(_MultiConfigFeature, self).__init__(name=_MultiConfigFeature.__NAME,
                                                  version=_MultiConfigFeature.__VERSION,
                                                  supported=_MultiConfigFeature.__SUPPORTED)


class _ETPFeature(AgentSupportedFeature):

    __NAME = SupportedFeatureNames.ExtensionTelemetryPipeline
    __VERSION = "1.0"
    __SUPPORTED = True

    def __init__(self):
        super(_ETPFeature, self).__init__(name=self.__NAME,
                                          version=self.__VERSION,
                                          supported=self.__SUPPORTED)


class _GAVersioningGovernanceFeature(AgentSupportedFeature):
    """
    CRP drives the RSM update only if the agent reports that it supports RSM upgrades via
    this flag; otherwise CRP falls back to the largest available version. The agent does not
    report this feature flag if auto update is disabled, if an old agent version that does
    not understand GA versioning is running, or if support for versioning is explicitly
    disabled in the agent.

    Note: Windows in particular needs this flag to report to CRP that the GA does not
    support updates, so Linux adopted the same flag to have a common solution.
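The two advertised-feature dictionaries in this module are filtered the same way before being reported: only entries whose `is_supported` flag is `True` survive. A simplified stand-alone sketch (the `Feature` class here is an illustrative stand-in for `AgentSupportedFeature`):

```python
# Sketch of how the module filters advertised features before reporting to CRP.
class Feature(object):
    def __init__(self, name, supported):
        self.name = name
        self.is_supported = supported

CRP_ADVERTISED = {
    "MultipleExtensionsPerHandler": Feature("MultipleExtensionsPerHandler", True),
    "VersioningGovernance": Feature("VersioningGovernance", False),  # e.g. auto update disabled
}

def features_for_crp(advertised):
    # Same dict-comprehension filter used by get_agent_supported_features_list_for_crp()
    return dict((name, feature) for name, feature in advertised.items() if feature.is_supported)

print(sorted(features_for_crp(CRP_ADVERTISED)))  # ['MultipleExtensionsPerHandler']
```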
""" __NAME = SupportedFeatureNames.GAVersioningGovernance __VERSION = "1.0" __SUPPORTED = conf.get_auto_update_to_latest_version() and conf.get_enable_ga_versioning() def __init__(self): super(_GAVersioningGovernanceFeature, self).__init__(name=self.__NAME, version=self.__VERSION, supported=self.__SUPPORTED) # This is the list of features that Agent supports and we advertise to CRP __CRP_ADVERTISED_FEATURES = { SupportedFeatureNames.MultiConfig: _MultiConfigFeature(), SupportedFeatureNames.GAVersioningGovernance: _GAVersioningGovernanceFeature() } # This is the list of features that Agent supports and we advertise to Extensions __EXTENSION_ADVERTISED_FEATURES = { SupportedFeatureNames.ExtensionTelemetryPipeline: _ETPFeature() } def get_supported_feature_by_name(feature_name): if feature_name in __CRP_ADVERTISED_FEATURES: return __CRP_ADVERTISED_FEATURES[feature_name] if feature_name in __EXTENSION_ADVERTISED_FEATURES: return __EXTENSION_ADVERTISED_FEATURES[feature_name] raise NotImplementedError("Feature with Name: {0} not found".format(feature_name)) def get_agent_supported_features_list_for_crp(): """ List of features that the GuestAgent currently supports (like FastTrack, MultiConfig, etc). We need to send this list as part of Status reporting to inform CRP of all the features the agent supports. :return: Dict containing all CRP supported features with the key as their names and the AgentFeature object as the value if they are supported by the Agent Eg: { MultipleExtensionsPerHandler: _MultiConfigFeature() } """ return dict((name, feature) for name, feature in __CRP_ADVERTISED_FEATURES.items() if feature.is_supported) def get_agent_supported_features_list_for_extensions(): """ List of features that the GuestAgent currently supports (like Extension Telemetry Pipeline, etc) needed by Extensions. We need to send this list as environment variables when calling extension commands to inform Extensions of all the features the agent supports. 
:return: Dict containing all Extension supported features with the key as their names and the AgentFeature object as the value if the feature is supported by the Agent. Eg: { CRPSupportedFeatureNames.ExtensionTelemetryPipeline: _ETPFeature() } """ return dict((name, feature) for name, feature in __EXTENSION_ADVERTISED_FEATURES.items() if feature.is_supported) Azure-WALinuxAgent-a976115/azurelinuxagent/common/conf.py000066400000000000000000000562011510742556200234120ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ Module conf loads and parses configuration file """ # pylint: disable=W0105 import os import os.path from azurelinuxagent.common.utils.fileutil import read_file #pylint: disable=R0401 from azurelinuxagent.common.exception import AgentConfigError DISABLE_AGENT_FILE = 'disable_agent' class ConfigurationProvider(object): """ Parse and store key:values in /etc/waagent.conf. 
""" def __init__(self): self.values = {} def load(self, content): if not content: raise AgentConfigError("Can't not parse empty configuration") for line in content.split('\n'): if not line.startswith("#") and "=" in line: parts = line.split('=', 1) if len(parts) < 2: continue key = parts[0].strip() value = parts[1].split('#')[0].strip("\" ").strip() self.values[key] = value if value != "None" else None @staticmethod def _get_default(default): if hasattr(default, '__call__'): return default() return default def get(self, key, default_value): """ Retrieves a string parameter by key and returns its value. If not found returns the default value, or if the default value is a callable returns the result of invoking the callable. """ val = self.values.get(key) return val if val is not None else self._get_default(default_value) def get_switch(self, key, default_value): """ Retrieves a switch parameter by key and returns its value as a boolean. If not found returns the default value, or if the default value is a callable returns the result of invoking the callable. """ val = self.values.get(key) if val is not None and val.lower() == 'y': return True elif val is not None and val.lower() == 'n': return False return self._get_default(default_value) def get_int(self, key, default_value): """ Retrieves an int parameter by key and returns its value. If not found returns the default value, or if the default value is a callable returns the result of invoking the callable. """ try: return int(self.values.get(key)) except TypeError: return self._get_default(default_value) except ValueError: return self._get_default(default_value) def is_present(self, key): """ Returns True if the given flag present in the configuration file, False otherwise. 
""" return self.values.get(key) is not None __conf__ = ConfigurationProvider() def load_conf_from_file(conf_file_path, conf=__conf__): """ Load conf file from: conf_file_path """ if os.path.isfile(conf_file_path) == False: raise AgentConfigError(("Missing configuration in {0}" "").format(conf_file_path)) try: content = read_file(conf_file_path) conf.load(content) except IOError as err: raise AgentConfigError(("Failed to load conf file:{0}, {1}" "").format(conf_file_path, err)) __SWITCH_OPTIONS__ = { "OS.AllowHTTP": False, "OS.EnableFirewall": False, "OS.EnableFIPS": False, "OS.EnableRDMA": False, "OS.UpdateRdmaDriver": False, "OS.CheckRdmaDriver": False, "Logs.Verbose": False, "Logs.Console": True, "Logs.Collect": True, "Extensions.Enabled": True, "Extensions.WaitForCloudInit": False, "Provisioning.AllowResetSysUser": False, "Provisioning.RegenerateSshHostKeyPair": False, "Provisioning.DeleteRootPassword": False, "Provisioning.DecodeCustomData": False, "Provisioning.ExecuteCustomData": False, "Provisioning.MonitorHostName": False, "DetectScvmmEnv": False, "ResourceDisk.Format": False, "ResourceDisk.EnableSwap": False, "ResourceDisk.EnableSwapEncryption": False, "AutoUpdate.Enabled": True, "AutoUpdate.UpdateToLatestVersion": True, "EnableOverProvisioning": True, # # "Debug" options are experimental and may be removed in later # versions of the Agent. 
# "Debug.CgroupLogMetrics": False, "Debug.CgroupDisableOnProcessCheckFailure": True, "Debug.CgroupDisableOnQuotaCheckFailure": True, "Debug.EnableAgentMemoryUsageCheck": False, "Debug.EnableFastTrack": True, "Debug.EnableGAVersioning": True, "Debug.EnableCgroupV2ResourceLimiting": False, "Debug.EnableExtensionPolicy": False, "Debug.EnableSignatureValidation": False } __STRING_OPTIONS__ = { "Lib.Dir": "/var/lib/waagent", "DVD.MountPoint": "/mnt/cdrom/secure", "Pid.File": "/var/run/waagent.pid", "Extension.LogDir": "/var/log/azure", "OS.OpensslPath": "/usr/bin/openssl", "OS.SshDir": "/etc/ssh", "OS.HomeDir": "/home", "OS.PasswordPath": "/etc/shadow", "OS.SudoersDir": "/etc/sudoers.d", "OS.RootDeviceScsiTimeout": None, "Provisioning.Agent": "auto", "Provisioning.SshHostKeyPairType": "rsa", "Provisioning.PasswordCryptId": "6", "HttpProxy.Host": None, "ResourceDisk.MountPoint": "/mnt/resource", "ResourceDisk.MountOptions": None, "ResourceDisk.Filesystem": "ext3", "AutoUpdate.GAFamily": "Prod", "Policy.PolicyFilePath": "/etc/waagent_policy.json", "Protocol.EndpointDiscovery": "dhcp" } __INTEGER_OPTIONS__ = { "Extensions.GoalStatePeriod": 6, "Extensions.InitialGoalStatePeriod": 6, "Extensions.WaitForCloudInitTimeout": 3600, "OS.EnableFirewallPeriod": 300, "OS.RemovePersistentNetRulesPeriod": 30, "OS.RootDeviceScsiTimeoutPeriod": 30, "OS.MonitorDhcpClientRestartPeriod": 30, "OS.SshClientAliveInterval": 180, "Provisioning.MonitorHostNamePeriod": 30, "Provisioning.PasswordCryptSaltLength": 10, "HttpProxy.Port": None, "ResourceDisk.SwapSizeMB": 0, "Autoupdate.Frequency": 3600, "Logs.CollectPeriod": 3600, # # "Debug" options are experimental and may be removed in later # versions of the Agent. 
# "Debug.CgroupCheckPeriod": 300, "Debug.AgentCpuQuota": 50, "Debug.AgentCpuThrottledTimeThreshold": 120, "Debug.AgentMemoryQuota": 30 * 1024 ** 2, "Debug.EtpCollectionPeriod": 300, "Debug.AutoUpdateHotfixFrequency": 14400, "Debug.AutoUpdateNormalFrequency": 86400, "Debug.FirewallRulesLogPeriod": 86400, "Debug.LogCollectorInitialDelay": 5 * 60 } def get_configuration(conf=__conf__): options = {} for option in __SWITCH_OPTIONS__: options[option] = conf.get_switch(option, __SWITCH_OPTIONS__[option]) for option in __STRING_OPTIONS__: options[option] = conf.get(option, __STRING_OPTIONS__[option]) for option in __INTEGER_OPTIONS__: options[option] = conf.get_int(option, __INTEGER_OPTIONS__[option]) return options def get_default_value(option): if option in __STRING_OPTIONS__: return __STRING_OPTIONS__[option] raise ValueError("{0} is not a valid configuration parameter.".format(option)) def get_int_default_value(option): if option in __INTEGER_OPTIONS__: return int(__INTEGER_OPTIONS__[option]) raise ValueError("{0} is not a valid configuration parameter.".format(option)) def get_switch_default_value(option): if option in __SWITCH_OPTIONS__: return __SWITCH_OPTIONS__[option] raise ValueError("{0} is not a valid configuration parameter.".format(option)) def is_present(key, conf=__conf__): """ Returns True if the given flag present in the configuration file, False otherwise. 
""" return conf.is_present(key) def enable_firewall(conf=__conf__): return conf.get_switch("OS.EnableFirewall", False) def get_enable_firewall_period(conf=__conf__): return conf.get_int("OS.EnableFirewallPeriod", 300) def get_remove_persistent_net_rules_period(conf=__conf__): return conf.get_int("OS.RemovePersistentNetRulesPeriod", 30) def get_monitor_dhcp_client_restart_period(conf=__conf__): return conf.get_int("OS.MonitorDhcpClientRestartPeriod", 30) def enable_rdma(conf=__conf__): return conf.get_switch("OS.EnableRDMA", False) or \ conf.get_switch("OS.UpdateRdmaDriver", False) or \ conf.get_switch("OS.CheckRdmaDriver", False) def enable_rdma_update(conf=__conf__): return conf.get_switch("OS.UpdateRdmaDriver", False) def enable_check_rdma_driver(conf=__conf__): return conf.get_switch("OS.CheckRdmaDriver", True) def get_logs_verbose(conf=__conf__): return conf.get_switch("Logs.Verbose", False) def get_logs_console(conf=__conf__): return conf.get_switch("Logs.Console", True) def get_collect_logs(conf=__conf__): return conf.get_switch("Logs.Collect", True) def get_collect_logs_period(conf=__conf__): return conf.get_int("Logs.CollectPeriod", 3600) def get_lib_dir(conf=__conf__): return conf.get("Lib.Dir", "/var/lib/waagent") def get_published_hostname(conf=__conf__): # Some applications rely on this file; do not remove this setting return os.path.join(get_lib_dir(conf), 'published_hostname') def get_dvd_mount_point(conf=__conf__): return conf.get("DVD.MountPoint", "/mnt/cdrom/secure") def get_agent_pid_file_path(conf=__conf__): return conf.get("Pid.File", "/var/run/waagent.pid") def get_ext_log_dir(conf=__conf__): return conf.get("Extension.LogDir", "/var/log/azure") def get_agent_log_file(): return "/var/log/waagent.log" def get_policy_file_path(conf=__conf__): return conf.get("Policy.PolicyFilePath", "/etc/waagent_policy.json") def get_fips_enabled(conf=__conf__): return conf.get_switch("OS.EnableFIPS", False) def get_openssl_cmd(conf=__conf__): return 
conf.get("OS.OpensslPath", "/usr/bin/openssl") def get_ssh_client_alive_interval(conf=__conf__): return conf.get("OS.SshClientAliveInterval", 180) def get_ssh_dir(conf=__conf__): return conf.get("OS.SshDir", "/etc/ssh") def get_home_dir(conf=__conf__): return conf.get("OS.HomeDir", "/home") def get_passwd_file_path(conf=__conf__): return conf.get("OS.PasswordPath", "/etc/shadow") def get_sudoers_dir(conf=__conf__): return conf.get("OS.SudoersDir", "/etc/sudoers.d") def get_sshd_conf_file_path(conf=__conf__): return os.path.join(get_ssh_dir(conf), "sshd_config") def get_ssh_key_glob(conf=__conf__): return os.path.join(get_ssh_dir(conf), 'ssh_host_*key*') def get_ssh_key_private_path(conf=__conf__): return os.path.join(get_ssh_dir(conf), 'ssh_host_{0}_key'.format(get_ssh_host_keypair_type(conf))) def get_ssh_key_public_path(conf=__conf__): return os.path.join(get_ssh_dir(conf), 'ssh_host_{0}_key.pub'.format(get_ssh_host_keypair_type(conf))) def get_root_device_scsi_timeout(conf=__conf__): return conf.get("OS.RootDeviceScsiTimeout", None) def get_root_device_scsi_timeout_period(conf=__conf__): return conf.get_int("OS.RootDeviceScsiTimeoutPeriod", 30) def get_ssh_host_keypair_type(conf=__conf__): keypair_type = conf.get("Provisioning.SshHostKeyPairType", "rsa") if keypair_type == "auto": ''' auto generates all supported key types and returns the rsa thumbprint as the default. 
'''
        return "rsa"
    return keypair_type


def get_ssh_host_keypair_mode(conf=__conf__):
    return conf.get("Provisioning.SshHostKeyPairType", "rsa")


def get_extensions_enabled(conf=__conf__):
    return conf.get_switch("Extensions.Enabled", True)


def get_wait_for_cloud_init(conf=__conf__):
    return conf.get_switch("Extensions.WaitForCloudInit", False)


def get_wait_for_cloud_init_timeout(conf=__conf__):
    # Note: this is an integer option (see __INTEGER_OPTIONS__), so it must be read with
    # get_int; get_switch would always return the default.
    return conf.get_int("Extensions.WaitForCloudInitTimeout", 3600)


def get_goal_state_period(conf=__conf__):
    return conf.get_int("Extensions.GoalStatePeriod", 6)


def get_initial_goal_state_period(conf=__conf__):
    return conf.get_int("Extensions.InitialGoalStatePeriod", default_value=lambda: get_goal_state_period(conf=conf))


def get_allow_reset_sys_user(conf=__conf__):
    return conf.get_switch("Provisioning.AllowResetSysUser", False)


def get_regenerate_ssh_host_key(conf=__conf__):
    return conf.get_switch("Provisioning.RegenerateSshHostKeyPair", False)


def get_delete_root_password(conf=__conf__):
    return conf.get_switch("Provisioning.DeleteRootPassword", False)


def get_decode_customdata(conf=__conf__):
    return conf.get_switch("Provisioning.DecodeCustomData", False)


def get_execute_customdata(conf=__conf__):
    return conf.get_switch("Provisioning.ExecuteCustomData", False)


def get_password_cryptid(conf=__conf__):
    return conf.get("Provisioning.PasswordCryptId", "6")


def get_provisioning_agent(conf=__conf__):
    return conf.get("Provisioning.Agent", "auto")


def get_provision_enabled(conf=__conf__):
    """
    Provisioning (as far as waagent is concerned) is enabled if the agent is set to either
    'auto' or 'waagent'. This wraps logic that was introduced for flexible provisioning
    agent configuration and detection, and replaces the older boolean setting that turned
    provisioning on or off.
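The callable-default mechanism (see `_get_default` in `ConfigurationProvider`) is what lets `Extensions.InitialGoalStatePeriod` fall back to `Extensions.GoalStatePeriod` only when the former is not configured. A simplified stand-alone sketch (`values` and `get_int` here are illustrative stand-ins for the provider):

```python
# Sketch of the lambda-default fallback used by get_initial_goal_state_period().
values = {}

def get_int(key, default_value):
    try:
        return int(values.get(key))
    except (TypeError, ValueError):
        # evaluate the default lazily when it is a callable
        return default_value() if callable(default_value) else default_value

def get_goal_state_period():
    return get_int("Extensions.GoalStatePeriod", 6)

def get_initial_goal_state_period():
    # the lambda defers evaluation, so a configured GoalStatePeriod is picked up
    return get_int("Extensions.InitialGoalStatePeriod", lambda: get_goal_state_period())

print(get_initial_goal_state_period())  # 6: falls through both defaults
values["Extensions.GoalStatePeriod"] = "30"
print(get_initial_goal_state_period())  # 30: inherits the configured period
```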
""" return get_provisioning_agent(conf) in ("auto", "waagent") def get_password_crypt_salt_len(conf=__conf__): return conf.get_int("Provisioning.PasswordCryptSaltLength", 10) def get_monitor_hostname(conf=__conf__): return conf.get_switch("Provisioning.MonitorHostName", False) def get_monitor_hostname_period(conf=__conf__): return conf.get_int("Provisioning.MonitorHostNamePeriod", 30) def get_httpproxy_host(conf=__conf__): return conf.get("HttpProxy.Host", None) def get_httpproxy_port(conf=__conf__): return conf.get_int("HttpProxy.Port", None) def get_detect_scvmm_env(conf=__conf__): return conf.get_switch("DetectScvmmEnv", False) def get_resourcedisk_format(conf=__conf__): return conf.get_switch("ResourceDisk.Format", False) def get_resourcedisk_enable_swap(conf=__conf__): return conf.get_switch("ResourceDisk.EnableSwap", False) def get_resourcedisk_enable_swap_encryption(conf=__conf__): return conf.get_switch("ResourceDisk.EnableSwapEncryption", False) def get_resourcedisk_mountpoint(conf=__conf__): return conf.get("ResourceDisk.MountPoint", "/mnt/resource") def get_resourcedisk_mountoptions(conf=__conf__): return conf.get("ResourceDisk.MountOptions", None) def get_resourcedisk_filesystem(conf=__conf__): return conf.get("ResourceDisk.Filesystem", "ext3") def get_resourcedisk_swap_size_mb(conf=__conf__): return conf.get_int("ResourceDisk.SwapSizeMB", 0) def get_autoupdate_gafamily(conf=__conf__): return conf.get("AutoUpdate.GAFamily", "Prod") def get_autoupdate_enabled(conf=__conf__): return conf.get_switch("AutoUpdate.Enabled", True) def get_autoupdate_frequency(conf=__conf__): return conf.get_int("Autoupdate.Frequency", 3600) def get_enable_overprovisioning(conf=__conf__): return conf.get_switch("EnableOverProvisioning", True) def get_allow_http(conf=__conf__): return conf.get_switch("OS.AllowHTTP", False) def get_disable_agent_file_path(conf=__conf__): return os.path.join(get_lib_dir(conf), DISABLE_AGENT_FILE) def get_cgroups_enabled(conf=__conf__): return 
conf.get_switch("CGroups.Enabled", True) def get_monitor_network_configuration_changes(conf=__conf__): return conf.get_switch("Monitor.NetworkConfigurationChanges", False) def get_auto_update_to_latest_version(conf=__conf__): """ If set to True, the agent will update to the latest available version. NOTE: when turned on, AutoUpdate.Enabled and AutoUpdate.UpdateToLatestVersion have the same meaning: update to the latest version. When turned off, AutoUpdate.Enabled reverts to the pre-installed agent, whereas AutoUpdate.UpdateToLatestVersion uses the latest version already installed on the VM and does not download new agents. Even though we are deprecating AutoUpdate.Enabled, we still need to honor it when users explicitly set it instead of the new flag. If AutoUpdate.UpdateToLatestVersion is present, it overrides any value set for AutoUpdate.Enabled. If AutoUpdate.UpdateToLatestVersion is not present but AutoUpdate.Enabled is present and set to 'n', we adhere to AutoUpdate.Enabled's behavior. If neither is present, we default to True. """ default = get_autoupdate_enabled(conf=conf) return conf.get_switch("AutoUpdate.UpdateToLatestVersion", default) def get_protocol_endpoint_discovery(conf=__conf__): return conf.get("Protocol.EndpointDiscovery", "dhcp") def get_dhcp_discovery_enabled(conf=__conf__): """ Determines how the agent will discover the wireserver endpoint. If set to 'dhcp', the agent will use DHCP to get the wireserver endpoint. Otherwise, the agent will use the known wireserver endpoint (168.63.129.16). """ return get_protocol_endpoint_discovery(conf) == "dhcp" def get_cgroup_check_period(conf=__conf__): """ How often to perform checks on cgroups (are the processes in the cgroups as expected, has the agent exceeded its quota, etc.) NOTE: This option is experimental and may be removed in later versions of the Agent. 
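The precedence rules described in the get_auto_update_to_latest_version docstring can be condensed into a runnable sketch; the plain-dict conf and the resolve_update_flag helper below are illustrative stand-ins, not agent code:

```python
def resolve_update_flag(conf):
    """Resolve the effective 'update to latest version' setting.

    AutoUpdate.UpdateToLatestVersion, when present, overrides
    AutoUpdate.Enabled; if neither flag is present the default is True.
    (Illustrative helper; the agent reads these through conf.get_switch.)
    """
    if "AutoUpdate.UpdateToLatestVersion" in conf:
        return conf["AutoUpdate.UpdateToLatestVersion"]
    return conf.get("AutoUpdate.Enabled", True)

print(resolve_update_flag({}))                                          # True (default)
print(resolve_update_flag({"AutoUpdate.Enabled": False}))               # False (legacy flag honored)
print(resolve_update_flag({"AutoUpdate.Enabled": False,
                           "AutoUpdate.UpdateToLatestVersion": True}))  # True (new flag wins)
```
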
""" return conf.get_int("Debug.CgroupCheckPeriod", 300) def get_cgroup_log_metrics(conf=__conf__): """ If True, resource usage metrics are written to the local log NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.CgroupLogMetrics", False) def get_cgroup_disable_on_process_check_failure(conf=__conf__): """ If True, cgroups will be disabled if the process check fails NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.CgroupDisableOnProcessCheckFailure", True) def get_cgroup_disable_on_quota_check_failure(conf=__conf__): """ If True, cgroups will be disabled if the CPU quota check fails NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.CgroupDisableOnQuotaCheckFailure", True) def get_agent_cpu_quota(conf=__conf__): """ CPU quota for the agent as a percentage of 1 CPU (100% == 1 CPU) NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.AgentCpuQuota", 50) def get_agent_cpu_throttled_time_threshold(conf=__conf__): """ Throttled time threshold for agent cpu in seconds. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.AgentCpuThrottledTimeThreshold", 120) def get_agent_memory_quota(conf=__conf__): """ Memory quota for the agent in bytes. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.AgentMemoryQuota", 30 * 1024 ** 2) def get_enable_agent_memory_usage_check(conf=__conf__): """ If True, Agent checks it's Memory usage. NOTE: This option is experimental and may be removed in later versions of the Agent. 
""" return conf.get_switch("Debug.EnableAgentMemoryUsageCheck", False) def get_enable_fast_track(conf=__conf__): """ If True, the agent uses FastTrack when retrieving goal states NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableFastTrack", True) def get_etp_collection_period(conf=__conf__): """ Determines the frequency to perform ETP collection on extensions telemetry events. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.EtpCollectionPeriod", 300) def get_self_update_hotfix_frequency(conf=__conf__): """ Determines the frequency to check for Hotfix upgrades (version changed in new upgrades). NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.SelfUpdateHotfixFrequency", 4 * 60 * 60) def get_self_update_regular_frequency(conf=__conf__): """ Determines the frequency to check for regular upgrades (version changed in new upgrades). NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.SelfUpdateRegularFrequency", 24 * 60 * 60) def get_enable_ga_versioning(conf=__conf__): """ If True, the agent looks for RSM updates (checking the requested version in the goal state); otherwise it falls back to self-update and finds the highest version from PIR. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableGAVersioning", True) def get_firewall_rules_log_period(conf=__conf__): """ Determines the frequency to perform the periodic operation of logging firewall rules. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.FirewallRulesLogPeriod", 86400) def get_extension_policy_enabled(conf=__conf__): """ Determines whether extension policy is enabled. 
If true, policy will be enforced before installing any extensions. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableExtensionPolicy", False) def get_enable_cgroup_v2_resource_limiting(conf=__conf__): """ If True, the agent will enable resource monitoring and enforcement for the log collector on machines using cgroup v2. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableCgroupV2ResourceLimiting", True) def get_log_collector_initial_delay(conf=__conf__): """ Determine the initial delay at service start before the first periodic log collection. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.LogCollectorInitialDelay", 5 * 60) def get_signature_validation_enabled(conf=__conf__): """ Determine whether signature validation is enabled. If true, package signature will be validated before installing any signed extensions. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableSignatureValidation", False) def get_enable_rsm_downgrade(conf=__conf__): """ If False, the agent will not downgrade to a lower version when a lower version is requested in the goal state. Todo: Flag will be removed once we have a fix for rsm downgrade scenario. """ return conf.get_switch("Debug.EnableRsmDowngrade", False) Azure-WALinuxAgent-a976115/azurelinuxagent/common/datacontract.py000066400000000000000000000052561510742556200251400ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.exception import ProtocolError import azurelinuxagent.common.logger as logger # pylint: disable=W0105 """ Base class for data contracts between guest and host and utilities to manipulate the properties in those contracts """ # pylint: enable=W0105 class DataContract(object): pass class DataContractList(list): def __init__(self, item_cls): # pylint: disable=W0231 self.item_cls = item_cls def validate_param(name, val, expected_type): if val is None: raise ProtocolError("{0} is None".format(name)) if not isinstance(val, expected_type): raise ProtocolError(("{0} type should be {1} not {2}" "").format(name, expected_type, type(val))) def set_properties(name, obj, data): if isinstance(obj, DataContract): validate_param("Property '{0}'".format(name), data, dict) for prob_name, prob_val in data.items(): prob_full_name = "{0}.{1}".format(name, prob_name) try: prob = getattr(obj, prob_name) except AttributeError: logger.warn("Unknown property: {0}", prob_full_name) continue prob = set_properties(prob_full_name, prob, prob_val) setattr(obj, prob_name, prob) return obj elif isinstance(obj, DataContractList): validate_param("List '{0}'".format(name), data, list) for item_data in data: item = obj.item_cls() item = set_properties(name, item, item_data) obj.append(item) return obj else: return data def get_properties(obj): if isinstance(obj, DataContract): data = {} props = vars(obj) for prob_name, prob in list(props.items()): data[prob_name] = get_properties(prob) return data elif isinstance(obj, 
DataContractList): data = [] for item in obj: item_data = get_properties(item) data.append(item_data) return data else: return obj Azure-WALinuxAgent-a976115/azurelinuxagent/common/dhcp.py000066400000000000000000000362411510742556200234050ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import array import os import socket import time import azurelinuxagent.common.logger as logger from azurelinuxagent.common import conf from azurelinuxagent.common.exception import DhcpError from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP from azurelinuxagent.common.utils.textutil import hex_dump, hex_dump2, \ hex_dump3, \ compare_bytes, str_to_ord, \ unpack_big_endian, \ int_to_ip4_addr # the kernel routing table representation of 168.63.129.16 KNOWN_WIRESERVER_IP_ENTRY = '10813FA8' def get_dhcp_handler(): return DhcpHandler() class DhcpHandler(object): """ Azure use DHCP option 245 to pass endpoint ip to VMs. 
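The recursive mapping performed by set_properties/get_properties in datacontract.py can be illustrated with a self-contained sketch. The Route/NetworkInfo contracts and the simplified set_props/get_props helpers below are hypothetical stand-ins that omit the validation and error handling of the real functions:

```python
class DataContract(object):
    pass

class DataContractList(list):
    def __init__(self, item_cls):
        super(DataContractList, self).__init__()
        self.item_cls = item_cls

# Hypothetical contracts, for illustration only.
class Route(DataContract):
    def __init__(self):
        self.destination = None
        self.gateway = None

class NetworkInfo(DataContract):
    def __init__(self):
        self.hostname = None
        self.routes = DataContractList(Route)

def set_props(obj, data):
    # Simplified set_properties: copy parsed dict/list data onto contract objects.
    if isinstance(obj, DataContract):
        for name, value in data.items():
            setattr(obj, name, set_props(getattr(obj, name), value))
        return obj
    if isinstance(obj, DataContractList):
        for item_data in data:
            obj.append(set_props(obj.item_cls(), item_data))
        return obj
    return data  # plain value

def get_props(obj):
    # Simplified get_properties: turn contract objects back into dicts/lists.
    if isinstance(obj, DataContract):
        return {name: get_props(value) for name, value in vars(obj).items()}
    if isinstance(obj, DataContractList):
        return [get_props(item) for item in obj]
    return obj

data = {"hostname": "vm-0",
        "routes": [{"destination": "0.0.0.0", "gateway": "10.0.0.1"}]}
info = set_props(NetworkInfo(), data)
print(info.routes[0].gateway)   # 10.0.0.1
```

The round trip is lossless for this shape: get_props(info) reproduces the original dict, which is the property the real helpers rely on when serializing contracts.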
""" def __init__(self): self.osutil = get_osutil() self.endpoint = None self.gateway = None self.routes = None self._request_broadcast = False self.skip_cache = False def run(self): """ Send dhcp request Configure default gateway and routes Save wire server endpoint if found """ if self.wireserver_route_exists or self.dhcp_cache_exists: return self.send_dhcp_req() self.conf_routes() def wait_for_network(self): """ Wait for network stack to be initialized. """ ipv4 = self.osutil.get_ip4_addr() while ipv4 == '' or ipv4 == '0.0.0.0': logger.info("Waiting for network.") time.sleep(10) logger.info("Try to start network interface.") self.osutil.start_network() ipv4 = self.osutil.get_ip4_addr() @property def wireserver_route_exists(self): """ Determine whether a route to the known wireserver ip already exists, and if so use that as the endpoint. This is true when running in a virtual network. :return: True if a route to KNOWN_WIRESERVER_IP exists. """ route_exists = False logger.info("Test for route to {0}".format(KNOWN_WIRESERVER_IP)) try: route_table = self.osutil.read_route_table() if any((KNOWN_WIRESERVER_IP_ENTRY in route) for route in route_table): # reset self.gateway and self.routes # we do not need to alter the routing table self.endpoint = KNOWN_WIRESERVER_IP self.gateway = None self.routes = None route_exists = True logger.info("Route to {0} exists".format(KNOWN_WIRESERVER_IP)) else: logger.warn("No route exists to {0}".format(KNOWN_WIRESERVER_IP)) except Exception as e: logger.error( "Could not determine whether route exists to {0}: {1}".format( KNOWN_WIRESERVER_IP, e)) return route_exists @property def dhcp_cache_exists(self): """ Check whether the dhcp options cache exists and contains the wireserver endpoint, unless skip_cache is True. 
:return: True if the cached endpoint was found in the dhcp lease """ if self.skip_cache: return False exists = False logger.info("Checking for dhcp lease cache") cached_endpoint = self.osutil.get_dhcp_lease_endpoint() # pylint: disable=E1128 if cached_endpoint is not None: self.endpoint = cached_endpoint exists = True logger.info("Cache exists [{0}]".format(exists)) return exists def conf_routes(self): logger.info("Configure routes") logger.info("Gateway:{0}", self.gateway) logger.info("Routes:{0}", self.routes) # Add default gateway if self.gateway is not None and self.osutil.is_missing_default_route(): self.osutil.route_add(0, 0, self.gateway) if self.routes is not None: for route in self.routes: self.osutil.route_add(route[0], route[1], route[2]) def _send_dhcp_req(self, request): __waiting_duration__ = [0, 10, 30, 60, 60] for duration in __waiting_duration__: try: self.osutil.allow_dhcp_broadcast() response = socket_send(request) validate_dhcp_resp(request, response) return response except DhcpError as e: logger.warn("Failed to send DHCP request: {0}", e) time.sleep(duration) return None def send_dhcp_req(self): """ Check if DHCP is available """ dhcp_available = self.osutil.is_dhcp_available() # If user has DHCP disabled for their VM then the dhcp request will fail. The user can configure the agent to # use the known wire server ip instead. use_dhcp = conf.get_dhcp_discovery_enabled() if not dhcp_available or not use_dhcp: if not use_dhcp: logger.info("send_dhcp_req: DHCP usage for endpoint discovery is disabled (Protocol.EndpointDiscovery={0}). 
Will use known wireserver endpoint.".format(conf.get_protocol_endpoint_discovery())) elif not dhcp_available: logger.info("send_dhcp_req: DHCP not available") self.endpoint = KNOWN_WIRESERVER_IP return # pylint: disable=W0105 """ Build dhcp request with mac addr Configure route to allow dhcp traffic Stop dhcp service if necessary """ # pylint: enable=W0105 logger.info("Send dhcp request") mac_addr = self.osutil.get_mac_addr() # Do unicast first, then fall back to broadcast if it fails. req = build_dhcp_request(mac_addr, self._request_broadcast) if not self._request_broadcast: self._request_broadcast = True # Temporarily allow broadcast for dhcp. Remove the route when done. missing_default_route = self.osutil.is_missing_default_route() ifname = self.osutil.get_if_name() if missing_default_route: self.osutil.set_route_for_dhcp_broadcast(ifname) # In some distros, the dhcp service needs to be shut down before the agent probes the # endpoint through dhcp. if self.osutil.is_dhcp_enabled(): self.osutil.stop_dhcp_service() resp = self._send_dhcp_req(req) if self.osutil.is_dhcp_enabled(): self.osutil.start_dhcp_service() if missing_default_route: self.osutil.remove_route_for_dhcp_broadcast(ifname) if resp is None: raise DhcpError("Failed to receive dhcp response.") self.endpoint, self.gateway, self.routes = parse_dhcp_resp(resp) def validate_dhcp_resp(request, response): # pylint: disable=R1710 bytes_recv = len(response) if bytes_recv < 0xF6: logger.error("HandleDhcpResponse: Too few bytes received:{0}", bytes_recv) return False logger.verbose("BytesReceived:{0}", hex(bytes_recv)) logger.verbose("DHCP response:{0}", hex_dump(response, bytes_recv)) # check transactionId, cookie, MAC address; cookie should never mismatch # transactionId and MAC address may mismatch if we see a response # meant for another machine if not compare_bytes(request, response, 0xEC, 4): logger.verbose("Cookie mismatch:\nsend={0},\nreceive={1}", hex_dump3(request, 0xEC, 4), hex_dump3(response, 0xEC, 4)) raise 
DhcpError("Cookie in dhcp response doesn't match the request") if not compare_bytes(request, response, 4, 4): logger.verbose("TransactionID mismatch:\nsend={0},\nreceive={1}", hex_dump3(request, 4, 4), hex_dump3(response, 4, 4)) raise DhcpError("TransactionID in dhcp response " "doesn't match the request") if not compare_bytes(request, response, 0x1C, 6): logger.verbose("MAC address mismatch:\nsend={0},\nreceive={1}", hex_dump3(request, 0x1C, 6), hex_dump3(response, 0x1C, 6)) raise DhcpError("MAC address in dhcp response " "doesn't match the request") def parse_route(response, option, i, length, bytes_recv): # pylint: disable=W0613 # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx logger.verbose("Routes at offset: {0} with length:{1}", hex(i), hex(length)) routes = [] if length < 5: logger.error("Data too small for option:{0}", option) j = i + 2 while j < (i + length + 2): mask_len_bits = str_to_ord(response[j]) mask_len_bytes = (((mask_len_bits + 7) & ~7) >> 3) mask = 0xFFFFFFFF & (0xFFFFFFFF << (32 - mask_len_bits)) j += 1 net = unpack_big_endian(response, j, mask_len_bytes) net <<= (32 - mask_len_bytes * 8) net &= mask j += mask_len_bytes gateway = unpack_big_endian(response, j, 4) j += 4 routes.append((net, mask, gateway)) if j != (i + length + 2): logger.error("Unable to parse routes") return routes def parse_ip_addr(response, option, i, length, bytes_recv): if i + 5 < bytes_recv: if length != 4: logger.error("Endpoint or Default Gateway not 4 bytes") return None addr = unpack_big_endian(response, i + 2, 4) ip_addr = int_to_ip4_addr(addr) return ip_addr else: logger.error("Data too small for option:{0}", option) return None def parse_dhcp_resp(response): """ Parse DHCP response: Returns endpoint server or None on error. """ logger.verbose("parse Dhcp Response") bytes_recv = len(response) endpoint = None gateway = None routes = None # Walk all the returned options, parsing out what we need, ignoring the 
We need the custom option 245 to find the endpoint we talk to # as well as to handle some Linux DHCP client incompatibilities; # options 3 for default gateway and 249 for routes; 255 is end. i = 0xF0 # offset to first option while i < bytes_recv: option = str_to_ord(response[i]) length = 0 if (i + 1) < bytes_recv: length = str_to_ord(response[i + 1]) logger.verbose("DHCP option {0} at offset:{1} with length:{2}", hex(option), hex(i), hex(length)) if option == 255: logger.verbose("DHCP packet ended at offset:{0}", hex(i)) break elif option == 249: routes = parse_route(response, option, i, length, bytes_recv) elif option == 3: gateway = parse_ip_addr(response, option, i, length, bytes_recv) logger.verbose("Default gateway:{0}, at {1}", gateway, hex(i)) elif option == 245: endpoint = parse_ip_addr(response, option, i, length, bytes_recv) logger.verbose("Azure wire protocol endpoint:{0}, at {1}", endpoint, hex(i)) else: logger.verbose("Skipping DHCP option:{0} at {1} with length {2}", hex(option), hex(i), hex(length)) i += length + 2 return endpoint, gateway, routes def socket_send(request): sock = None try: sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(("0.0.0.0", 68)) sock.sendto(request, ("<broadcast>", 67)) sock.settimeout(10) logger.verbose("Send DHCP request: Setting socket.timeout=10, " "entering recv") response = sock.recv(1024) return response except IOError as e: raise DhcpError("{0}".format(e)) finally: if sock is not None: sock.close() def build_dhcp_request(mac_addr, request_broadcast): """ Build DHCP request string. 
""" # # typedef struct _DHCP { # UINT8 Opcode; /* op: BOOTREQUEST or BOOTREPLY */ # UINT8 HardwareAddressType; /* htype: ethernet */ # UINT8 HardwareAddressLength; /* hlen: 6 (48 bit mac address) */ # UINT8 Hops; /* hops: 0 */ # UINT8 TransactionID[4]; /* xid: random */ # UINT8 Seconds[2]; /* secs: 0 */ # UINT8 Flags[2]; /* flags: 0 or 0x8000 for broadcast*/ # UINT8 ClientIpAddress[4]; /* ciaddr: 0 */ # UINT8 YourIpAddress[4]; /* yiaddr: 0 */ # UINT8 ServerIpAddress[4]; /* siaddr: 0 */ # UINT8 RelayAgentIpAddress[4]; /* giaddr: 0 */ # UINT8 ClientHardwareAddress[16]; /* chaddr: 6 byte eth MAC address */ # UINT8 ServerName[64]; /* sname: 0 */ # UINT8 BootFileName[128]; /* file: 0 */ # UINT8 MagicCookie[4]; /* 99 130 83 99 */ # /* 0x63 0x82 0x53 0x63 */ # /* options -- hard code ours */ # # UINT8 MessageTypeCode; /* 53 */ # UINT8 MessageTypeLength; /* 1 */ # UINT8 MessageType; /* 1 for DISCOVER */ # UINT8 End; /* 255 */ # } DHCP; # # tuple of 244 zeros # (struct.pack_into would be good here, but requires Python 2.5) request = [0] * 244 trans_id = gen_trans_id() # Opcode = 1 # HardwareAddressType = 1 (ethernet/MAC) # HardwareAddressLength = 6 (ethernet/MAC/48 bits) for a in range(0, 3): request[a] = [1, 1, 6][a] # fill in transaction id (random number to ensure response matches request) for a in range(0, 4): request[4 + a] = str_to_ord(trans_id[a]) logger.verbose("BuildDhcpRequest: transactionId:%s,%04X" % ( hex_dump2(trans_id), unpack_big_endian(request, 4, 4))) if request_broadcast: # set broadcast flag to true to request the dhcp server # to respond to a broadcast address; # this is useful when the user's dhclient fails. 
request[0x0A] = 0x80 # fill in ClientHardwareAddress for a in range(0, 6): request[0x1C + a] = str_to_ord(mac_addr[a]) # DHCP Magic Cookie: 99, 130, 83, 99 # MessageTypeCode = 53 DHCP Message Type # MessageTypeLength = 1 # MessageType = DHCPDISCOVER # End = 255 DHCP_END for a in range(0, 8): request[0xEC + a] = [99, 130, 83, 99, 53, 1, 1, 255][a] return array.array("B", request) def gen_trans_id(): return os.urandom(4) Azure-WALinuxAgent-a976115/azurelinuxagent/common/errorstate.py000066400000000000000000000022441510742556200246550ustar00rootroot00000000000000from datetime import datetime, timedelta from azurelinuxagent.common.future import UTC ERROR_STATE_DELTA_DEFAULT = timedelta(minutes=15) ERROR_STATE_DELTA_INSTALL = timedelta(minutes=5) ERROR_STATE_HOST_PLUGIN_FAILURE = timedelta(minutes=5) class ErrorState(object): def __init__(self, min_timedelta=ERROR_STATE_DELTA_DEFAULT): self.min_timedelta = min_timedelta self.count = 0 self.timestamp = None def incr(self): if self.count == 0: self.timestamp = datetime.now(UTC) self.count += 1 def reset(self): self.count = 0 self.timestamp = None def is_triggered(self): if self.timestamp is None: return False delta = datetime.now(UTC) - self.timestamp if delta >= self.min_timedelta: return True return False @property def fail_time(self): if self.timestamp is None: return 'unknown' delta = round((datetime.now(UTC) - self.timestamp).seconds / 60.0, 2) if delta < 60: return '{0} min'.format(delta) delta_hr = round(delta / 60.0, 2) return '{0} hr'.format(delta_hr) Azure-WALinuxAgent-a976115/azurelinuxagent/common/event.py000066400000000000000000001064361510742556200236140ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
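The BOOTP/DHCP frame layout that build_dhcp_request assembles can be sketched in a self-contained form. build_discover below is a hypothetical condensed helper (with a fixed transaction id and MAC address for illustration) that mirrors the offsets used above:

```python
import array

def build_discover(mac_addr, xid, broadcast=False):
    # 244-byte BOOTP frame: fixed header, magic cookie, then a minimal
    # DHCPDISCOVER option list (message type 53 = 1, then end = 255).
    pkt = [0] * 244
    pkt[0:3] = [1, 1, 6]                  # op=BOOTREQUEST, htype=ethernet, hlen=6
    pkt[4:8] = list(xid)                  # xid: lets us match response to request
    if broadcast:
        pkt[0x0A] = 0x80                  # flags: ask the server to reply via broadcast
    pkt[0x1C:0x1C + 6] = list(mac_addr)   # chaddr: client hardware address
    # magic cookie (99, 130, 83, 99), option 53 len 1 value 1 (DISCOVER), end (255)
    pkt[0xEC:0xF4] = [99, 130, 83, 99, 53, 1, 1, 255]
    return array.array("B", pkt)

req = build_discover(b"\x00\x0d\x3a\x00\x00\x01", b"\xde\xad\xbe\xef", broadcast=True)
print(len(req))              # 244
print(list(req[0xEC:0xF0]))  # [99, 130, 83, 99]
```

The offsets 0x0A (flags), 0x1C (chaddr), and 0xEC (magic cookie) are the same ones the agent's validate_dhcp_resp later compares between request and response.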
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import atexit import json import os import platform import re import sys import threading import time import traceback from datetime import datetime import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.exception import EventError, OSUtilError from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.common.datacontract import get_properties, set_properties from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.telemetryevent import TelemetryEventParam, TelemetryEvent, CommonTelemetryEventSchema, \ GuestAgentGenericLogsSchema, GuestAgentExtensionEventsSchema, GuestAgentPerfCounterEventsSchema from azurelinuxagent.common.utils import fileutil, textutil, timeutil from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, getattrib, str_to_encoded_ustr, \ redact_sas_token from azurelinuxagent.common.version import CURRENT_VERSION, CURRENT_AGENT, AGENT_NAME, DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, AGENT_EXECUTION_MODE from azurelinuxagent.common.protocol.imds import get_imds_client EVENTS_DIRECTORY = "events" _EVENT_MSG = "Event: name={0}, op={1}, message={2}, duration={3}" TELEMETRY_EVENT_PROVIDER_ID = "69B669B9-4AF8-4C50-BDC4-6006FA76E975" TELEMETRY_EVENT_EVENT_ID = 1 TELEMETRY_METRICS_EVENT_ID = 4 TELEMETRY_LOG_PROVIDER_ID = "FFF0196F-EE4C-4EAF-9AA5-776F622DEB4F" TELEMETRY_LOG_EVENT_ID = 7 # # When this flag is enabled the TODO comment 
in Logger.log() needs to be addressed; also the tests # marked with "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled" should be enabled. # SEND_LOGS_TO_TELEMETRY = False MAX_NUMBER_OF_EVENTS = 1000 AGENT_EVENT_FILE_EXTENSION = '.waagent.tld' EVENT_FILE_REGEX = re.compile(r'(?P\.waagent)?\.tld$') def send_logs_to_telemetry(): return SEND_LOGS_TO_TELEMETRY class WALAEventOperation: ActivateResourceDisk = "ActivateResourceDisk" AgentDisabled = "AgentDisabled" AgentEnabled = "AgentEnabled" AgentMemory = "AgentMemory" AgentUpgrade = "AgentUpgrade" ArtifactsProfileBlob = "ArtifactsProfileBlob" CGroupsCleanUp = "CGroupsCleanUp" CGroupsDisabled = "CGroupsDisabled" CGroupsInfo = "CGroupsInfo" CloudInit = "CloudInit" CollectEventErrors = "CollectEventErrors" CollectEventUnicodeErrors = "CollectEventUnicodeErrors" ConfigurationChange = "ConfigurationChange" CustomData = "CustomData" DefaultChannelChange = "DefaultChannelChange" Deploy = "Deploy" Disable = "Disable" Downgrade = "Downgrade" Download = "Download" Enable = "Enable" ExtensionHandlerManifest = "ExtensionHandlerManifest" ExtensionPolicy = "ExtensionPolicy" ExtensionProcessing = "ExtensionProcessing" ExtensionResourceGovernance = "ExtensionResourceGovernance" ExtensionTelemetryEventProcessing = "ExtensionTelemetryEventProcessing" FetchGoalState = "FetchGoalState" Firewall = "Firewall" GoalState = "GoalState" GoalStateCertificates = "GoalStateCertificates" GoalStateUnsupportedFeatures = "GoalStateUnsupportedFeatures" HealthCheck = "HealthCheck" HealthObservation = "HealthObservation" HeartBeat = "HeartBeat" HostnamePublishing = "HostnamePublishing" HostPlugin = "HostPlugin" HostPluginHeartbeat = "HostPluginHeartbeat" HostPluginHeartbeatExtended = "HostPluginHeartbeatExtended" HttpErrors = "HttpErrors" HttpGet = "HttpGet" ImdsHeartbeat = "ImdsHeartbeat" Install = "Install" InitializeHostPlugin = "InitializeHostPlugin" Log = "Log" LogCollection = "LogCollection" NoExec = "NoExec" OSInfo = "OSInfo" OpenSsl = 
"OpenSsl" PersistFirewallRules = "PersistFirewallRules" Policy = "Policy" ProvisionAfterExtensions = "ProvisionAfterExtensions" PluginSettingsVersionMismatch = "PluginSettingsVersionMismatch" InvalidExtensionConfig = "InvalidExtensionConfig" ProtocolEndpoint = "ProtocolEndpoint" Provision = "Provision" ProvisionGuestAgent = "ProvisionGuestAgent" RemoteAccessHandling = "RemoteAccessHandling" ReportEventErrors = "ReportEventErrors" ReportEventUnicodeErrors = "ReportEventUnicodeErrors" ReportStatus = "ReportStatus" ReportStatusExtended = "ReportStatusExtended" RequestedStateDisabled = "RequestedStateDisabled" RequestedVersionMismatch = "RequestedVersionMismatch" ResetFirewall = "ResetFirewall" Restart = "Restart" SignatureValidation = "SignatureValidation" ExtensionSigned = "ExtensionSigned" PackageSignatureResult = "PackageSignatureResult" PackageSigningInfoResult = "PackageSigningInfoResult" SetCGroupsLimits = "SetCGroupsLimits" SkipUpdate = "SkipUpdate" StatusProcessing = "StatusProcessing" UnhandledError = "UnhandledError" UnInstall = "UnInstall" Unknown = "Unknown" Update = "Update" VmSettings = "VmSettings" VmSettingsSummary = "VmSettingsSummary" SHOULD_ENCODE_MESSAGE_LEN = 80 SHOULD_ENCODE_MESSAGE_OP = [ WALAEventOperation.Disable, WALAEventOperation.Enable, WALAEventOperation.Install, WALAEventOperation.UnInstall, ] class EventStatus(object): EVENT_STATUS_FILE = "event_status.json" def __init__(self): self._path = None self._status = {} def clear(self): self._status = {} self._save() def event_marked(self, name, version, op): return self._event_name(name, version, op) in self._status def event_succeeded(self, name, version, op): event = self._event_name(name, version, op) if event not in self._status: return True return self._status[event] is True def initialize(self, status_dir=conf.get_lib_dir()): self._path = os.path.join(status_dir, EventStatus.EVENT_STATUS_FILE) self._load() def mark_event_status(self, name, version, op, status): event = 
self._event_name(name, version, op) self._status[event] = (status is True) self._save() def _event_name(self, name, version, op): return "{0}-{1}-{2}".format(name, version, op) def _load(self): try: self._status = {} if os.path.isfile(self._path): with open(self._path, 'r') as f: self._status = json.load(f) except Exception as e: logger.warn("Exception occurred loading event status: {0}".format(e)) self._status = {} def _save(self): try: with open(self._path, 'w') as f: json.dump(self._status, f) except Exception as e: logger.warn("Exception occurred saving event status: {0}".format(e)) __event_status__ = EventStatus() __event_status_operations__ = [ WALAEventOperation.ReportStatus ] def parse_json_event(data_str): data = json.loads(data_str) event = TelemetryEvent() set_properties("TelemetryEvent", event, data) event.file_type = "json" return event def parse_event(data_str): try: return parse_json_event(data_str) except ValueError: return parse_xml_event(data_str) def parse_xml_param(param_node): name = getattrib(param_node, "Name") value_str = getattrib(param_node, "Value") attr_type = getattrib(param_node, "T") value = value_str if attr_type == 'mt:uint64': value = int(value_str) elif attr_type == 'mt:bool': value = bool(value_str) elif attr_type == 'mt:float64': value = float(value_str) return TelemetryEventParam(name, value) def parse_xml_event(data_str): try: xml_doc = parse_doc(data_str) event_id = getattrib(find(xml_doc, "Event"), 'id') provider_id = getattrib(find(xml_doc, "Provider"), 'id') event = TelemetryEvent(event_id, provider_id) param_nodes = findall(xml_doc, 'Param') for param_node in param_nodes: event.parameters.append(parse_xml_param(param_node)) event.file_type = "xml" return event except Exception as e: raise ValueError(ustr(e)) def redact_event_msg(event): """ Redact the message in the event if it contains SAS tokens. 
""" for param in event.parameters: if param.name == GuestAgentExtensionEventsSchema.Message: param.value = redact_sas_token(param.value) def _encode_message(op, message): """ Gzip and base64 encode a message based on the operation. The intent of this message is to make the logs human readable and include the stdout/stderr from extension operations. Extension operations tend to generate a lot of noise, which makes it difficult to parse the line-oriented waagent.log. The compromise is to encode the stdout/stderr so we preserve the data and do not destroy the line oriented nature. The data can be recovered using the following command: $ echo '' | base64 -d | pigz -zd You may need to install the pigz command. :param op: Operation, e.g. Enable or Install :param message: Message to encode :return: gzip'ed and base64 encoded message, or the original message """ if len(message) == 0: return message if op not in SHOULD_ENCODE_MESSAGE_OP: return message try: return textutil.compress(message) except Exception: # If the message could not be encoded a dummy message ('<>') is returned. # The original message was still sent via telemetry, so all is not lost. return "<>" def _log_event(name, op, message, duration, is_success=True): global _EVENT_MSG # pylint: disable=W0602, W0603 if not is_success: logger.error(_EVENT_MSG, name, op, message, duration) else: logger.info(_EVENT_MSG, name, op, message, duration) class CollectOrReportEventDebugInfo(object): """ This class is used for capturing and reporting debug info that is captured during event collection and reporting to wireserver. It captures the count of unicode errors and any unexpected errors and also a subset of errors with stacks to help with debugging any potential issues. 
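The encode/recover round trip that the _encode_message docstring describes (compress, then base64, recoverable with `base64 -d | pigz -zd`) can be sketched as follows; compress_for_telemetry and recover are illustrative helpers, not the agent's textutil.compress itself:

```python
import base64
import zlib

def compress_for_telemetry(message):
    # zlib-compress then base64-encode, matching the documented recovery path:
    #   echo '<encoded>' | base64 -d | pigz -zd
    return base64.b64encode(zlib.compress(message.encode("utf-8"))).decode("ascii")

def recover(encoded):
    # Inverse of compress_for_telemetry: base64-decode then decompress.
    return zlib.decompress(base64.b64decode(encoded)).decode("utf-8")

stdout = "Enable succeeded\n" * 20
encoded = compress_for_telemetry(stdout)
print(len(encoded) < len(stdout))   # True: repetitive extension output compresses well
print(recover(encoded) == stdout)   # True: the round trip is lossless
```

This is why the agent only encodes the verbose operations (Enable, Install, etc.): the line-oriented waagent.log stays readable while the stdout/stderr payload is preserved.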
""" __MAX_ERRORS_TO_REPORT = 5 OP_REPORT = "Report" OP_COLLECT = "Collect" def __init__(self, operation=OP_REPORT): self.__unicode_error_count = 0 self.__unicode_errors = set() self.__op_error_count = 0 self.__op_errors = set() if operation == self.OP_REPORT: self.__unicode_error_event = WALAEventOperation.ReportEventUnicodeErrors self.__op_errors_event = WALAEventOperation.ReportEventErrors elif operation == self.OP_COLLECT: self.__unicode_error_event = WALAEventOperation.CollectEventUnicodeErrors self.__op_errors_event = WALAEventOperation.CollectEventErrors def report_debug_info(self): def report_dropped_events_error(count, errors, operation_name): err_msg_format = "DroppedEventsCount: {0}\nReasons (first {1} errors): {2}" if count > 0: add_event(op=operation_name, message=err_msg_format.format(count, CollectOrReportEventDebugInfo.__MAX_ERRORS_TO_REPORT, ', '.join(errors)), is_success=False) report_dropped_events_error(self.__op_error_count, self.__op_errors, self.__op_errors_event) report_dropped_events_error(self.__unicode_error_count, self.__unicode_errors, self.__unicode_error_event) @staticmethod def _update_errors_and_get_count(error_count, errors, error_msg): error_count += 1 if len(errors) < CollectOrReportEventDebugInfo.__MAX_ERRORS_TO_REPORT: errors.add("{0}: {1}".format(ustr(error_msg), traceback.format_exc())) return error_count def update_unicode_error(self, unicode_err): self.__unicode_error_count = self._update_errors_and_get_count(self.__unicode_error_count, self.__unicode_errors, unicode_err) def update_op_error(self, op_err): self.__op_error_count = self._update_errors_and_get_count(self.__op_error_count, self.__op_errors, op_err) def get_error_count(self): return self.__op_error_count + self.__unicode_error_count class EventLogger(object): def __init__(self): self.event_dir = None self.periodic_events = {} self.protocol = None # # All events should have these parameters. # # The first set comes from the current OS and is initialized here. 
These values don't change during # the agent's lifetime. # # The next two sets come from the goal state and IMDS and must be explicitly initialized using # initialize_vminfo_common_parameters() once a protocol for communication with the host has been # created. Their values don't change during the agent's lifetime. Note that we initialize these # parameters here using dummy values (*_UNINITIALIZED) since events sent to the host should always # match the schema defined for them in the telemetry pipeline. # # There is another set of common parameters that must be computed at the time the event is created # (e.g. the timestamp and the container ID); those are added to events (along with the parameters # below) in _add_common_event_parameters() # # Note that different kinds of events may also include other parameters; those are added by the # corresponding add_* method (e.g. add_metric for performance metrics). # self._common_parameters = [] # Parameters from OS osutil = get_osutil() keyword_name = { "CpuArchitecture": osutil.get_vm_arch() } self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.OSVersion, EventLogger._get_os_version())) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.ExecutionMode, AGENT_EXECUTION_MODE)) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.RAM, int(EventLogger._get_ram(osutil)))) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.Processors, int(EventLogger._get_processors(osutil)))) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.KeywordName, json.dumps(keyword_name))) # Parameters from goal state self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.TenantName, "TenantName_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.RoleName, "RoleName_UNINITIALIZED")) 
self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.RoleInstanceName, "RoleInstanceName_UNINITIALIZED")) # # # Parameters from IMDS self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.Location, "Location_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.SubscriptionId, "SubscriptionId_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.ResourceGroupName, "ResourceGroupName_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.VMId, "VMId_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.ImageOrigin, 0)) @staticmethod def _get_os_version(): return "{0}:{1}-{2}-{3}:{4}".format(platform.system(), DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, platform.release()) @staticmethod def _get_ram(osutil): try: return osutil.get_total_mem() except OSUtilError as e: logger.warn("Failed to get RAM info; will be missing from telemetry: {0}", ustr(e)) return 0 @staticmethod def _get_processors(osutil): try: return osutil.get_processor_cores() except OSUtilError as e: logger.warn("Failed to get Processors info; will be missing from telemetry: {0}", ustr(e)) return 0 def initialize_vminfo_common_parameters(self, protocol): """ Initializes the common parameters that come from the goal state and IMDS """ # create an index of the event parameters for faster updates parameters = {} for p in self._common_parameters: parameters[p.name] = p try: vminfo = protocol.get_vminfo() parameters[CommonTelemetryEventSchema.TenantName].value = vminfo.tenantName parameters[CommonTelemetryEventSchema.RoleName].value = vminfo.roleName parameters[CommonTelemetryEventSchema.RoleInstanceName].value = vminfo.roleInstanceName except Exception as e: logger.warn("Failed to get VM info from goal state; will be missing from telemetry: {0}", ustr(e)) try: imds_client = get_imds_client() 
imds_info = imds_client.get_compute() parameters[CommonTelemetryEventSchema.Location].value = imds_info.location parameters[CommonTelemetryEventSchema.SubscriptionId].value = imds_info.subscriptionId parameters[CommonTelemetryEventSchema.ResourceGroupName].value = imds_info.resourceGroupName parameters[CommonTelemetryEventSchema.VMId].value = imds_info.vmId parameters[CommonTelemetryEventSchema.ImageOrigin].value = int(imds_info.image_origin) except Exception as e: logger.warn("Failed to get IMDS info; will be missing from telemetry: {0}", ustr(e)) def save_event(self, data): if self.event_dir is None: logger.warn("Cannot save event -- Event reporter is not initialized.") return try: fileutil.mkdir(self.event_dir, mode=0o700) except (IOError, OSError) as e: msg = "Failed to create events folder {0}. Error: {1}".format(self.event_dir, ustr(e)) raise EventError(msg) try: existing_events = os.listdir(self.event_dir) if len(existing_events) >= MAX_NUMBER_OF_EVENTS: logger.periodic_warn(logger.EVERY_MINUTE, "[PERIODIC] Too many files under: {0}, current count: {1}, " "removing oldest event files".format(self.event_dir, len(existing_events))) existing_events.sort() oldest_files = existing_events[:-999] for event_file in oldest_files: os.remove(os.path.join(self.event_dir, event_file)) except (IOError, OSError) as e: msg = "Failed to remove old events from events folder {0}. 
Error: {1}".format(self.event_dir, ustr(e)) raise EventError(msg) filename = os.path.join(self.event_dir, ustr(int(time.time() * 1000000))) try: with open(filename + ".tmp", 'wb+') as hfile: hfile.write(data.encode("utf-8")) os.rename(filename + ".tmp", filename + AGENT_EVENT_FILE_EXTENSION) except (IOError, OSError) as e: msg = "Failed to write events to file: {0}".format(e) raise EventError(msg) def reset_periodic(self): self.periodic_events = {} def is_period_elapsed(self, delta, h): return h not in self.periodic_events or \ (self.periodic_events[h] + delta) <= datetime.now(UTC) def add_periodic(self, delta, name, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, force=False): h = hash(name + op + ustr(is_success) + message) if force or self.is_period_elapsed(delta, h): self.add_event(name, op=op, is_success=is_success, duration=duration, version=version, message=message, log_event=log_event) self.periodic_events[h] = datetime.now(UTC) def add_event(self, name, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, flush=False): """ :param flush: Flush the event immediately to the wire server """ if (not is_success) and log_event: _log_event(name, op, message, duration, is_success=is_success) event = TelemetryEvent(TELEMETRY_EVENT_EVENT_ID, TELEMETRY_EVENT_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, str_to_encoded_ustr(name))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str_to_encoded_ustr(version))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, str_to_encoded_ustr(op))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, bool(is_success))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, str_to_encoded_ustr(message))) 
event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, int(duration))) self.add_common_event_parameters(event, datetime.now(UTC)) self.report_or_save_event(event, flush) def add_log_event(self, level, message): event = TelemetryEvent(TELEMETRY_LOG_EVENT_ID, TELEMETRY_LOG_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.EventName, WALAEventOperation.Log)) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.CapabilityUsed, logger.LogLevel.STRINGS[level])) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.Context1, str_to_encoded_ustr(self._clean_up_message(message)))) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.Context2, timeutil.create_utc_timestamp(datetime.now(UTC)))) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.Context3, '')) self.add_common_event_parameters(event, datetime.now(UTC)) self.report_or_save_event(event) def add_metric(self, category, counter, instance, value, log_event=False): """ Create and save an event which contains a telemetry event. :param str category: The category of metric (e.g. "cpu", "memory") :param str counter: The specific metric within the category (e.g. "%idle") :param str instance: For instanced metrics, the instance identifier (filesystem name, cpu core#, etc.) 
:param value: Value of the metric :param bool log_event: If true, log the collected metric in the agent log """ if log_event: message = "Metric {0}/{1} [{2}] = {3}".format(category, counter, instance, value) _log_event(AGENT_NAME, "METRIC", message, 0) event = TelemetryEvent(TELEMETRY_METRICS_EVENT_ID, TELEMETRY_EVENT_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Category, str_to_encoded_ustr(category))) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Counter, str_to_encoded_ustr(counter))) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Instance, str_to_encoded_ustr(instance))) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Value, float(value))) self.add_common_event_parameters(event, datetime.now(UTC)) self.report_or_save_event(event) def report_or_save_event(self, event, flush=False): """ Flush the event to wireserver if flush to set to true, else save it disk if we fail to send or not required to flush immediately. TODO: pickup as many events as possible and send them in one go. """ # redact message before save it to disk redact_event_msg(event) report_success = False if flush and self.protocol is not None: report_success = self.protocol.report_event([event], flush) if not report_success: try: data = get_properties(event) self.save_event(json.dumps(data)) except EventError as e: logger.periodic_error(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] {0}".format(ustr(e))) @staticmethod def _clean_up_message(message): # By the time the message has gotten to this point it is formatted as # # Old Time format # YYYY/MM/DD HH:mm:ss.fffffff LEVEL . # YYYY/MM/DD HH:mm:ss.fffffff . # YYYY/MM/DD HH:mm:ss LEVEL . # YYYY/MM/DD HH:mm:ss . # # UTC ISO Time format added in #1716 # YYYY-MM-DDTHH:mm:ss.fffffffZ LEVEL . # YYYY-MM-DDTHH:mm:ss.fffffffZ . # YYYY-MM-DDTHH:mm:ssZ LEVEL . # YYYY-MM-DDTHH:mm:ssZ . 
# # The timestamp and the level are redundant, and should be stripped. The logging library does not schematize # this data, so I am forced to parse the message using a regex. The format is regular, so the burden is low, # and usability on the telemetry side is high. if not message: return message # Adding two regexs to simplify the handling of logs and to keep it maintainable. Most of the logs would have # level includent in the log itself, but if it doesn't have, the second regex is a catch all case and will work # for all the cases. log_level_format_parser = re.compile(r"^.*(INFO|WARNING|ERROR|VERBOSE)\s*(.*)$") log_format_parser = re.compile(r"^[0-9:/\-TZ\s.]*\s(.*)$") # Parsing the log messages containing levels in it extract_level_message = log_level_format_parser.search(message) if extract_level_message: return extract_level_message.group(2) # The message bit else: # Parsing the log messages without levels in it. extract_message = log_format_parser.search(message) if extract_message: return extract_message.group(1) # The message bit else: return message def add_common_event_parameters(self, event, event_timestamp): """ This method is called for all events and ensures all telemetry fields are added before the event is sent out. Note that the event timestamp is saved in the OpcodeName field. 
""" common_params = [TelemetryEventParam(CommonTelemetryEventSchema.GAVersion, CURRENT_AGENT), TelemetryEventParam(CommonTelemetryEventSchema.ContainerId, AgentGlobals.get_container_id()), TelemetryEventParam(CommonTelemetryEventSchema.OpcodeName, timeutil.create_utc_timestamp(event_timestamp)), TelemetryEventParam(CommonTelemetryEventSchema.EventTid, threading.current_thread().ident), TelemetryEventParam(CommonTelemetryEventSchema.EventPid, os.getpid()), TelemetryEventParam(CommonTelemetryEventSchema.TaskName, threading.current_thread().name)] if event.eventId == TELEMETRY_EVENT_EVENT_ID and event.providerId == TELEMETRY_EVENT_PROVIDER_ID: # Currently only the GuestAgentExtensionEvents has these columns, the other tables dont have them so skipping # this data in those tables. common_params.extend([TelemetryEventParam(GuestAgentExtensionEventsSchema.ExtensionType, event.file_type), TelemetryEventParam(GuestAgentExtensionEventsSchema.IsInternal, False)]) event.parameters.extend(common_params) event.parameters.extend(self._common_parameters) __event_logger__ = EventLogger() def get_event_logger(): return __event_logger__ def elapsed_milliseconds(utc_start): now = datetime.now(UTC) if now < utc_start: return 0 d = now - utc_start return int(((d.days * 24 * 60 * 60 + d.seconds) * 1000) + \ (d.microseconds / 1000.0)) def report_event(op, is_success=True, message='', log_event=True, flush=False): """ :param flush: if true, flush the event immediately to the wire server """ add_event(AGENT_NAME, version=str(CURRENT_VERSION), is_success=is_success, message=message, op=op, log_event=log_event, flush=flush) def report_periodic(delta, op, is_success=True, message=''): add_periodic(delta, AGENT_NAME, version=str(CURRENT_VERSION), is_success=is_success, message=message, op=op) def report_metric(category, counter, instance, value, log_event=False, reporter=__event_logger__): """ Send a telemetry event reporting a single instance of a performance counter. 
:param str category: The category of the metric (cpu, memory, etc) :param str counter: The name of the metric ("%idle", etc) :param str instance: For instanced metrics, the identifier of the instance. E.g. a disk drive name, a cpu core# :param value: The value of the metric :param bool log_event: If True, log the metric in the agent log as well :param EventLogger reporter: The EventLogger instance to which metric events should be sent """ if reporter.event_dir is None: logger.warn("Cannot report metric event -- Event reporter is not initialized.") message = "Metric {0}/{1} [{2}] = {3}".format(category, counter, instance, value) _log_event(AGENT_NAME, "METRIC", message, 0) return try: reporter.add_metric(category, counter, instance, float(value), log_event) except ValueError: logger.periodic_warn(logger.EVERY_HALF_HOUR, "[PERIODIC] Cannot cast the metric value. Details of the Metric - " "{0}/{1} [{2}] = {3}".format(category, counter, instance, value)) def initialize_event_logger_vminfo_common_parameters_and_protocol(protocol, reporter=__event_logger__): # Initialize protocal for event logger to directly send events to wireserver reporter.protocol = protocol reporter.initialize_vminfo_common_parameters(protocol) def add_event(name=AGENT_NAME, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, flush=False, reporter=__event_logger__): """ :param flush: if true, flush the event immediately to the wire server """ if reporter.event_dir is None: logger.warn("Cannot add event -- Event reporter is not initialized.") _log_event(name, op, message, duration, is_success=is_success) return if should_emit_event(name, version, op, is_success): mark_event_status(name, version, op, is_success) reporter.add_event(name, op=op, is_success=is_success, duration=duration, version=str(version), message=message, log_event=log_event, flush=flush) def info(op, fmt, *args): """ Creates a telemetry event and logs the message as 
INFO. """ logger.info(fmt, *args) add_event(op=op, message=fmt.format(*args), is_success=True) def warn(op, fmt, *args): """ Creates a telemetry event and logs the message as WARNING. """ logger.warn(fmt, *args) add_event(op=op, message="[WARNING] " + fmt.format(*args), is_success=False, log_event=False) def error(op, fmt, *args): """ Creates a telemetry event and logs the message as ERROR. """ logger.error(fmt, *args) add_event(op=op, message=fmt.format(*args), is_success=False, log_event=False) class LogEvent(object): """ Helper class that allows the use of info()/warn()/error() using a specific instance of a logger. """ def __init__(self, logger_): self._logger = logger_ def info(self, op, fmt, *args): self._logger.info(fmt, *args) add_event(op=op, message=fmt.format(*args), is_success=True) def warn(self, op, fmt, *args): self._logger.warn(fmt, *args) add_event(op=op, message="[WARNING] " + fmt.format(*args), is_success=False, log_event=False) def error(self, op, fmt, *args): self._logger.error(fmt, *args) add_event(op=op, message=fmt.format(*args), is_success=False, log_event=False) def add_log_event(level, message, forced=False, reporter=__event_logger__): """ :param level: LoggerLevel of the log event :param message: Message :param forced: Force write the event even if send_logs_to_telemetry() is disabled (NOTE: Remove this flag once send_logs_to_telemetry() is enabled for all events) :param reporter: The EventLogger instance to which metric events should be sent :return: """ if reporter.event_dir is None: return if not (forced or send_logs_to_telemetry()): return if level >= logger.LogLevel.WARNING: reporter.add_log_event(level, message) def add_periodic(delta, name, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, force=False, reporter=__event_logger__): if reporter.event_dir is None: logger.warn("Cannot add periodic event -- Event reporter is not initialized.") _log_event(name, op, 
message, duration, is_success=is_success) return reporter.add_periodic(delta, name, op=op, is_success=is_success, duration=duration, version=str(version), message=message, log_event=log_event, force=force) def mark_event_status(name, version, op, status): if op in __event_status_operations__: __event_status__.mark_event_status(name, version, op, status) def should_emit_event(name, version, op, status): return \ op not in __event_status_operations__ or \ __event_status__ is None or \ not __event_status__.event_marked(name, version, op) or \ __event_status__.event_succeeded(name, version, op) != status def init_event_logger(event_dir): __event_logger__.event_dir = event_dir def init_event_status(status_dir): __event_status__.initialize(status_dir) def dump_unhandled_err(name): if hasattr(sys, 'last_type') and hasattr(sys, 'last_value') and \ hasattr(sys, 'last_traceback'): last_type = getattr(sys, 'last_type') last_value = getattr(sys, 'last_value') last_traceback = getattr(sys, 'last_traceback') trace = traceback.format_exception(last_type, last_value, last_traceback) message = "".join(trace) add_event(name, is_success=False, message=message, op=WALAEventOperation.UnhandledError) def enable_unhandled_err_dump(name): atexit.register(dump_unhandled_err, name) Azure-WALinuxAgent-a976115/azurelinuxagent/common/exception.py000066400000000000000000000177151510742556200244720ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

"""
Defines all exceptions
"""


class ExitException(BaseException):
    """
    Used to exit the agent's process
    """
    def __init__(self, reason):
        super(ExitException, self).__init__()
        self.reason = reason


class AgentUpgradeExitException(ExitException):
    """
    Used to exit the agent's process due to Agent Upgrade
    """


class AgentError(Exception):
    """
    Base class of agent error.
    """
    def __init__(self, msg, inner=None):
        msg = u"[{0}] {1}".format(type(self).__name__, msg)
        if inner is not None:
            msg = u"{0}\nInner error: {1}".format(msg, inner)
        super(AgentError, self).__init__(msg)


class AgentConfigError(AgentError):
    """
    When the configuration file is not found or malformed.
    """
    def __init__(self, msg=None, inner=None):
        super(AgentConfigError, self).__init__(msg, inner)


class AgentMemoryExceededException(AgentError):
    """
    When the Agent memory limit is reached.
    """
    def __init__(self, msg=None, inner=None):
        super(AgentMemoryExceededException, self).__init__(msg, inner)


class AgentNetworkError(AgentError):
    """
    When the network is not available.
    """
    def __init__(self, msg=None, inner=None):
        super(AgentNetworkError, self).__init__(msg, inner)


class AgentUpdateError(AgentError):
    """
    When the agent failed to update.
    """
    def __init__(self, msg=None, inner=None):
        super(AgentUpdateError, self).__init__(msg, inner)


class AgentFamilyMissingError(AgentError):
    """
    When the agent family is missing.
    """
    def __init__(self, msg=None, inner=None):
        super(AgentFamilyMissingError, self).__init__(msg, inner)


class CGroupsException(AgentError):
    """
    Exception to classify any cgroups related issue.
    """
    def __init__(self, msg=None, inner=None):
        super(CGroupsException, self).__init__(msg, inner)


class ExtensionError(AgentError):
    """
    When failing to execute an extension
    """
    def __init__(self, msg=None, inner=None, code=-1):
        super(ExtensionError, self).__init__(msg, inner)
        self.code = code


class ExtensionOperationError(ExtensionError):
    """
    When the command times out or returns with a non-zero exit_code
    """
    def __init__(self, msg=None, inner=None, code=-1, exit_code=-1):
        super(ExtensionOperationError, self).__init__(msg, inner)
        self.code = code
        self.exit_code = exit_code


class ExtensionUpdateError(ExtensionError):
    """
    Error raised when failing to update an extension
    """


class ExtensionDownloadError(ExtensionError):
    """
    Error raised when failing to download and set up an extension
    """


class ExtensionsGoalStateError(ExtensionError):
    """
    Error raised when the ExtensionsGoalState is malformed
    """


class ExtensionsConfigError(ExtensionsGoalStateError):
    """
    Error raised when the ExtensionsConfig is malformed
    """


class MultiConfigExtensionEnableError(ExtensionError):
    """
    Error raised when enable for a Multi-Config extension is failing.
    """


class ProvisionError(AgentError):
    """
    When provisioning failed
    """
    def __init__(self, msg=None, inner=None):
        super(ProvisionError, self).__init__(msg, inner)


class ResourceDiskError(AgentError):
    """
    Mounting the resource disk failed
    """
    def __init__(self, msg=None, inner=None):
        super(ResourceDiskError, self).__init__(msg, inner)


class DhcpError(AgentError):
    """
    Failed to handle a dhcp response
    """
    def __init__(self, msg=None, inner=None):
        super(DhcpError, self).__init__(msg, inner)


class OSUtilError(AgentError):
    """
    Failed to perform an operation on the OS configuration
    """
    def __init__(self, msg=None, inner=None):
        super(OSUtilError, self).__init__(msg, inner)


class ProtocolError(AgentError):
    """
    Azure protocol error
    """
    def __init__(self, msg=None, inner=None):
        super(ProtocolError, self).__init__(msg, inner)


class ProtocolNotFoundError(ProtocolError):
    """
    Error raised when the Azure protocol endpoint is not found
    """


class HttpError(AgentError):
    """
    Http request failure
    """
    def __init__(self, msg=None, inner=None):
        super(HttpError, self).__init__(msg, inner)


class InvalidContainerError(HttpError):
    """
    Error raised when the Container id sent in the header is invalid
    """


class EventError(AgentError):
    """
    Event reporting error
    """
    def __init__(self, msg=None, inner=None):
        super(EventError, self).__init__(msg, inner)


class CryptError(AgentError):
    """
    Encrypt/Decrypt error
    """
    def __init__(self, msg=None, inner=None):
        super(CryptError, self).__init__(msg, inner)


class UpdateError(AgentError):
    """
    Update Guest Agent error
    """
    def __init__(self, msg=None, inner=None):
        super(UpdateError, self).__init__(msg, inner)


class ResourceGoneError(HttpError):
    """
    The requested resource no longer exists (i.e., status code 410)
    """
    def __init__(self, msg=None, inner=None):
        if msg is None:
            msg = "Resource is gone"
        super(ResourceGoneError, self).__init__(msg, inner)


class InvalidExtensionEventError(AgentError):
    """
    Error thrown when the extension telemetry event is invalid as defined per the contract with
    extensions.
    """
    # Types of InvalidExtensionEventError
    MissingKeyError = "MissingKeyError"
    EmptyMessageError = "EmptyMessageError"
    OversizeEventError = "OversizeEventError"

    def __init__(self, msg=None, inner=None):
        super(InvalidExtensionEventError, self).__init__(msg, inner)


class ServiceStoppedError(AgentError):
    """
    Error thrown when trying to access a Service which is stopped
    """
    def __init__(self, msg=None, inner=None):
        super(ServiceStoppedError, self).__init__(msg, inner)


class ExtensionErrorCodes(object):
    """
    Common error codes used by Compute RP for better understanding of the cause and to clarify
    commonly occurring errors
    """
    # Unknown Failures
    PluginUnknownFailure = -1

    # Success
    PluginSuccess = 0

    # Catch all error code.
    PluginProcessingError = 1000

    # Plugin failed to download
    PluginManifestDownloadError = 1001

    # Cannot find or successfully load the HandlerManifest.json
    PluginHandlerManifestNotFound = 1002

    # Cannot successfully serialize the HandlerManifest.json
    PluginHandlerManifestDeserializationError = 1003

    # Cannot download the plugin package
    PluginPackageDownloadFailed = 1004

    # Cannot extract the plugin from the package
    PluginPackageExtractionFailed = 1005

    # Install failed
    PluginInstallProcessingFailed = 1007

    # Update failed
    PluginUpdateProcessingFailed = 1008

    # Enable failed
    PluginEnableProcessingFailed = 1009

    # Disable failed
    PluginDisableProcessingFailed = 1010

    # Extension script timed out
    PluginHandlerScriptTimedout = 1011

    # Invalid status file of the extension.
    PluginSettingsStatusInvalid = 1012

    def __init__(self):
        pass


class GoalStateAggregateStatusCodes(object):
    # Success
    Success = 0

    # Unknown failure
    GoalStateUnknownFailure = -1

    # The goal state requires features that are not supported by this version of the VM agent
    GoalStateUnsupportedRequiredFeatures = 2001

Azure-WALinuxAgent-a976115/azurelinuxagent/common/future.py

import contextlib
import datetime
import platform
import sys
import os
import re

# Note: broken dependency handling to avoid potential backward
# compatibility issues on different distributions
try:
    import distro  # pylint: disable=E0401
except Exception:
    pass

# pylint: disable=W0105
"""
Add alias for python2 and python3 libs and functions.
"""
# pylint: enable=W0105

if sys.version_info[0] == 3:
    import http.client as httpclient  # pylint: disable=W0611,import-error
    from urllib.parse import urlparse  # pylint: disable=W0611,import-error,no-name-in-module

    """Rename Python3 str to ustr"""  # pylint: disable=W0105
    ustr = str

    bytebuffer = memoryview

    # We aren't using these imports in this file, but we want them to be available
    # to import from this module in others.
    # Additionally, python2 doesn't have this, so we need to disable import-error
    # as well.
    # unused-import, import-error Disabled: Due to backward compatibility between py2 and py3
    from builtins import int, range  # pylint: disable=unused-import,import-error
    from collections import OrderedDict  # pylint: disable=W0611
    from queue import Queue, Empty  # pylint: disable=W0611,import-error

    # unused-import Disabled: python2.7 doesn't have subprocess.DEVNULL
    # so this import is only used by python3.
import subprocess # pylint: disable=unused-import elif sys.version_info[0] == 2: import httplib as httpclient # pylint: disable=E0401,W0611 from urlparse import urlparse # pylint: disable=E0401 from Queue import Queue, Empty # pylint: disable=W0611,import-error # We want to suppress the following: # - undefined-variable: # These builtins are not defined in python3 # - redefined-builtin: # This is intentional, so that code that wants to use builtins we're # assigning new names to doesn't need to check python versions before # doing so. # pylint: disable=undefined-variable,redefined-builtin ustr = unicode # Rename Python2 unicode to ustr bytebuffer = buffer range = xrange int = long if sys.version_info[1] >= 7: from collections import OrderedDict # For Py 2.7+ else: from ordereddict import OrderedDict # Works only on 2.6 # pylint: disable=E0401 else: raise ImportError("Unknown python version: {0}".format(sys.version_info)) # # datetime.utcnow triggers a DeprecationWarning on 3.12 and will be removed in a future version. # # To work around this, we use timezone.utc on 3.5-3.10 (it was introduced on 3.2, but currently we test from 3.5), and # datetime.UTC (introduced on Python 3.11) for >= 3.11. 
#
if sys.version_info[0] > 3 or sys.version_info[0] == 3 and sys.version_info[1] >= 11:
    # E1101: Module 'datetime' has no 'UTC' member (no-member)
    UTC = datetime.UTC  # pylint: disable=E1101
elif sys.version_info[0] == 3 and sys.version_info[1] >= 5:
    UTC = datetime.timezone.utc
else:
    from datetime import tzinfo, timedelta

    class _UTC(tzinfo):
        def utcoffset(self, dt):
            return timedelta(0)

        def tzname(self, dt):
            return "UTC"

        def dst(self, dt):
            return timedelta(0)

    UTC = _UTC()

datetime_max_utc = datetime.datetime.max.replace(tzinfo=UTC)
datetime_min_utc = datetime.datetime.min.replace(tzinfo=UTC)


def get_linux_distribution(get_full_name, supported_dists):
    """Abstract the platform.linux_distribution() call, which was deprecated in Python 3.5 and removed in Python 3.8"""
    try:
        supported = platform._supported_dists + (supported_dists,)
        osinfo = list(
            platform.linux_distribution(  # pylint: disable=W1505
                full_distribution_name=get_full_name,
                supported_dists=supported
            )
        )

        # platform.linux_distribution() has an issue detecting the OpenWRT Linux distribution.
        # Merge the following patch provided by OpenWRT as a temporary fix.
        if os.path.exists("/etc/openwrt_release"):
            osinfo = get_openwrt_platform()

        if not osinfo or osinfo == ['', '', '']:
            return get_linux_distribution_from_distro(get_full_name)

        full_name = platform.linux_distribution()[0].strip()  # pylint: disable=W1505
        osinfo.append(full_name)
    except AttributeError:
        return get_linux_distribution_from_distro(get_full_name)

    return osinfo


def get_linux_distribution_from_distro(get_full_name):
    """Get the distribution information from the distro Python module."""
    # If we get here we have to have the distro module, thus we do
    # not wrap the call in a try-except block as it would mask the problem
    # and result in a broken agent installation
    osinfo = list(
        distro.linux_distribution(
            full_distribution_name=get_full_name
        )
    )
    full_name = distro.linux_distribution()[0].strip()
    osinfo.append(full_name)

    # Fixes https://github.com/Azure/WALinuxAgent/issues/2715: distro.linux_distribution does not return the full version.
    # If best is true, the most precise version number out of all examined sources is returned.
    if "mariner" in osinfo[0].lower():
        osinfo[1] = distro.version(best=True)

    return osinfo


def get_openwrt_platform():
    """
    Workaround for detecting OpenWRT products, because the version and product
    information is contained in the /etc/openwrt_release file.
    """
    result = [None, None, None]
    openwrt_version = re.compile(r"^DISTRIB_RELEASE=['\"](\d+\.\d+.\d+)['\"]")
    openwrt_product = re.compile(r"^DISTRIB_ID=['\"]([\w-]+)['\"]")

    with open('/etc/openwrt_release', 'r') as fh:
        content = fh.readlines()

    for line in content:
        version_matches = openwrt_version.match(line)
        product_matches = openwrt_product.match(line)
        if version_matches:
            result[1] = version_matches.group(1)
        elif product_matches:
            if product_matches.group(1) == "OpenWrt":
                result[0] = "openwrt"
    return result


def is_file_not_found_error(exception):
    # pylint for python2 complains, but FileNotFoundError is
    # defined for python3.
    # pylint: disable=undefined-variable
    if sys.version_info[0] == 2:
        # Python 2 uses OSError(errno=2)
        return isinstance(exception, OSError) and exception.errno == 2

    return isinstance(exception, FileNotFoundError)


@contextlib.contextmanager
def subprocess_dev_null():
    if sys.version_info[0] == 3:
        # Suppress no-member errors on python2.7
        yield subprocess.DEVNULL  # pylint: disable=no-member
    else:
        devnull = None  # initialize before the try block so the finally clause cannot hit an unbound name
        try:
            devnull = open(os.devnull, "a+")
            yield devnull
        except Exception:
            yield None
        finally:
            if devnull is not None:
                devnull.close()


def array_to_bytes(buff):
    # Python 3.9 removed the tostring() method on arrays; the new alias is tobytes()
    if sys.version_info[0] == 2:
        return buff.tostring()
    if sys.version_info[0] == 3 and sys.version_info[1] <= 8:
        return buff.tostring()

    return buff.tobytes()

Azure-WALinuxAgent-a976115/azurelinuxagent/common/logger.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

"""
Log utils
"""
import sys
from datetime import datetime, timedelta
from threading import current_thread

from azurelinuxagent.common.future import ustr, UTC
from azurelinuxagent.common.utils import timeutil
from azurelinuxagent.common.utils.textutil import redact_sas_token

EVERY_DAY = timedelta(days=1)
EVERY_HALF_DAY = timedelta(hours=12)
EVERY_SIX_HOURS = timedelta(hours=6)
EVERY_HOUR = timedelta(hours=1)
EVERY_HALF_HOUR = timedelta(minutes=30)
EVERY_FIFTEEN_MINUTES = timedelta(minutes=15)
EVERY_MINUTE = timedelta(minutes=1)


class Logger(object):
    """
    Logger class
    """
    def __init__(self, logger=None, prefix=None):
        self.appenders = []
        self.logger = self if logger is None else logger
        self.periodic_messages = {}
        self.prefix = prefix
        self.silent = False

    def reset_periodic(self):
        self.logger.periodic_messages = {}

    def set_prefix(self, prefix):
        self.prefix = prefix

    def _is_period_elapsed(self, delta, h):
        return h not in self.logger.periodic_messages or \
            (self.logger.periodic_messages[h] + delta) <= datetime.now(UTC)

    def _periodic(self, delta, log_level_op, msg_format, *args):
        h = hash(msg_format)
        if self._is_period_elapsed(delta, h):
            log_level_op(msg_format, *args)
            self.logger.periodic_messages[h] = datetime.now(UTC)

    def periodic_info(self, delta, msg_format, *args):
        self._periodic(delta, self.info, msg_format, *args)

    def periodic_verbose(self, delta, msg_format, *args):
        self._periodic(delta, self.verbose, msg_format, *args)

    def periodic_warn(self, delta, msg_format, *args):
        self._periodic(delta, self.warn, msg_format, *args)

    def periodic_error(self, delta, msg_format, *args):
        self._periodic(delta, self.error, msg_format, *args)

    def verbose(self, msg_format, *args):
        self.log(LogLevel.VERBOSE, msg_format, *args)

    def info(self, msg_format, *args):
        self.log(LogLevel.INFO, msg_format, *args)

    def warn(self, msg_format, *args):
        self.log(LogLevel.WARNING, msg_format, *args)

    def error(self, msg_format, *args):
        self.log(LogLevel.ERROR, msg_format, *args)

    def log(self, level, msg_format, *args):
        def write_log(log_appender):  # pylint: disable=W0612
            """
            The appender_lock flag signals that the appender is currently in use. It prevents a
            log statement generated *while* writing another log statement from re-entering the
            same appender.

            E.g., assume a logger with two appenders, FileAppender and TelemetryAppender:

            logger.warn("foo")
                |- log.warn() (azurelinuxagent.common.logger.Logger.warn)
                    |- log() (azurelinuxagent.common.logger.Logger.log)
                        |- FileAppender.appender_lock is currently False, so "not log_appender.appender_lock" is True
                            |- We set it to True.
                            |- FileAppender.write completes.
                            |- FileAppender.appender_lock is set back to False.
                        |- TelemetryAppender.appender_lock is currently False, so "not log_appender.appender_lock" is True
                            |- We set it to True.
                        [A] |- TelemetryAppender.write gets called, hits an error, and itself logs log.warn("bar")
                                |- log() (azurelinuxagent.common.logger.Logger.log)
                                    |- FileAppender.appender_lock was False when entering, so it is set to True.
                                        |- FileAppender.write completes.
                                        |- FileAppender.appender_lock is set back to False.
                                    |- TelemetryAppender.appender_lock is already True, so "not log_appender.appender_lock" is False

            Thus [A] cannot happen again while TelemetryAppender.write is still on the stack;
            a faulty appender cannot be invoked again and again recursively.

            :param log_appender: Appender
            :return: None
            """
            if not log_appender.appender_lock:
                try:
                    log_appender.appender_lock = True
                    log_appender.write(level, log_item)
                finally:
                    log_appender.appender_lock = False

        if self.silent:
            return

        # if msg_format is not unicode convert it to unicode
        if type(msg_format) is not ustr:
            msg_format = ustr(msg_format, errors="backslashreplace")
        if len(args) > 0:
            msg = msg_format.format(*args)
        else:
            msg = msg_format
        # redact the sas token from the message before logging
        redacted_msg = redact_sas_token(msg)
        time = timeutil.create_utc_timestamp(datetime.now(UTC))
        level_str = LogLevel.STRINGS[level]
        thread_name = current_thread().name
        if self.prefix is not None:
            log_item = u"{0} {1} {2} {3} {4}\n".format(time, level_str, thread_name, self.prefix, redacted_msg)
        else:
            log_item = u"{0} {1} {2} {3}\n".format(time, level_str, thread_name, redacted_msg)

        log_item = ustr(log_item.encode('ascii', "backslashreplace"), encoding="ascii")

        for appender in self.appenders:
            appender.write(level, log_item)
            #
            # TODO: we should actually call
            #
            #     write_log(appender)
            #
            # (see PR #1659). Before doing that, write_log needs to be thread-safe.
            #
            # This needs to be done when SEND_LOGS_TO_TELEMETRY is enabled.
# if self.logger != self: for appender in self.logger.appenders: appender.write(level, log_item) # # TODO: call write_log instead (see comment above) # def add_appender(self, appender_type, level, path): appender = _create_logger_appender(appender_type, level, path) self.appenders.append(appender) def console_output_enabled(self): """ Returns True if the current list of appenders includes at least one ConsoleAppender """ return any(isinstance(appender, ConsoleAppender) for appender in self.appenders) def disable_console_output(self): """ Removes all ConsoleAppenders from the current list of appenders """ self.appenders = [appender for appender in self.appenders if not isinstance(appender, ConsoleAppender)] class Appender(object): def __init__(self, level): self.appender_lock = False self.level = level def write(self, level, msg): pass class ConsoleAppender(Appender): def __init__(self, level, path): super(ConsoleAppender, self).__init__(level) self.path = path def write(self, level, msg): if self.level <= level: try: with open(self.path, "w") as console: console.write(msg) except IOError: pass class FileAppender(Appender): def __init__(self, level, path): super(FileAppender, self).__init__(level) self.path = path def write(self, level, msg): if self.level <= level: try: with open(self.path, "a+") as log_file: log_file.write(msg) except IOError: pass class StdoutAppender(Appender): def __init__(self, level): # pylint: disable=W0235 super(StdoutAppender, self).__init__(level) def write(self, level, msg): if self.level <= level: try: sys.stdout.write(msg) except IOError: pass class TelemetryAppender(Appender): def __init__(self, level, event_func): super(TelemetryAppender, self).__init__(level) self.event_func = event_func def write(self, level, msg): if self.level <= level: try: self.event_func(level, msg) except IOError: pass # Initialize logger instance DEFAULT_LOGGER = Logger() class LogLevel(object): VERBOSE = 0 INFO = 1 WARNING = 2 ERROR = 3 STRINGS = [ 
"VERBOSE", "INFO", "WARNING", "ERROR" ] class AppenderType(object): FILE = 0 CONSOLE = 1 STDOUT = 2 TELEMETRY = 3 def add_logger_appender(appender_type, level=LogLevel.INFO, path=None): DEFAULT_LOGGER.add_appender(appender_type, level, path) def console_output_enabled(): return DEFAULT_LOGGER.console_output_enabled() def disable_console_output(): DEFAULT_LOGGER.disable_console_output() def reset_periodic(): DEFAULT_LOGGER.reset_periodic() def set_prefix(prefix): DEFAULT_LOGGER.set_prefix(prefix) def periodic_info(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_info(delta, msg_format, *args) def periodic_verbose(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_verbose(delta, msg_format, *args) def periodic_error(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_error(delta, msg_format, *args) def periodic_warn(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. 
""" DEFAULT_LOGGER.periodic_warn(delta, msg_format, *args) def verbose(msg_format, *args): DEFAULT_LOGGER.verbose(msg_format, *args) def info(msg_format, *args): DEFAULT_LOGGER.info(msg_format, *args) def warn(msg_format, *args): DEFAULT_LOGGER.warn(msg_format, *args) def error(msg_format, *args): DEFAULT_LOGGER.error(msg_format, *args) def log(level, msg_format, *args): DEFAULT_LOGGER.log(level, msg_format, args) def _create_logger_appender(appender_type, level=LogLevel.INFO, path=None): if appender_type == AppenderType.CONSOLE: return ConsoleAppender(level, path) elif appender_type == AppenderType.FILE: return FileAppender(level, path) elif appender_type == AppenderType.STDOUT: return StdoutAppender(level) elif appender_type == AppenderType.TELEMETRY: return TelemetryAppender(level, path) else: raise ValueError("Unknown appender type") Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/000077500000000000000000000000001510742556200234265ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/__init__.py000066400000000000000000000012631510742556200255410ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Requires Python 2.6+ and Openssl 1.0+
#

from azurelinuxagent.common.osutil.factory import get_osutil

Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/alpine.py

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.common.osutil.default import DefaultOSUtil


class AlpineOSUtil(DefaultOSUtil):

    def __init__(self):
        super(AlpineOSUtil, self).__init__()
        self.agent_conf_file_path = '/etc/waagent.conf'
        self.jit_enabled = True

    def is_dhcp_enabled(self):
        return True

    def get_dhcp_pid(self):
        # TODO: We really should get the pid from `dhcpcd --printpidfile`
        return sorted(self._get_dhcp_pid(["pidof", "dhcpcd"]))

    def restart_if(self, ifname, retries=None, wait=None):
        logger.info('restarting {} (sort of, actually SIGHUPing dhcpcd)'.format(ifname))
        pids = self.get_dhcp_pid()
        if len(pids) > 0:  # guard against an empty pid list; indexing it first would raise IndexError
            ret = shellutil.run_get_output('kill -HUP {}'.format(pids[0]))  # pylint: disable=W0612

    def set_ssh_client_alive_interval(self):
        # Alpine will handle this.
        pass

    def conf_sshd(self, disable_password):
        # Alpine will handle this.
pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/arch.py000066400000000000000000000041611510742556200247170ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class ArchUtil(DefaultOSUtil): def __init__(self): super(ArchUtil, self).__init__() self.jit_enabled = True @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" @staticmethod def get_agent_bin_path(): return "/usr/bin" def is_dhcp_enabled(self): return True def start_network(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run("systemctl restart systemd-networkd") def restart_ssh_service(self): # SSH is socket activated on CoreOS. No need to restart it. 
pass def stop_dhcp_service(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def conf_sshd(self, disable_password): # Don't whack the system default sshd conf pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/bigip.py000066400000000000000000000331731510742556200251010ustar00rootroot00000000000000# Copyright 2016 F5 Networks Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
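`ArchUtil` above illustrates the pattern used throughout this directory: `DefaultOSUtil` supplies generic behavior, and each distro class overrides only what differs. A toy sketch of that shape, with hypothetical command strings standing in for the real `shellutil` calls:

```python
# Hypothetical mini version of the osutil hierarchy: the base class supplies
# defaults, and subclasses override only the distro-specific pieces.
class DefaultOSUtil(object):
    service_name = "waagent"

    def start_agent_service(self):
        return "service {0} start".format(self.service_name)  # SysV-style default

    def is_dhcp_enabled(self):
        return False


class SystemdOSUtil(DefaultOSUtil):
    def start_agent_service(self):
        # systemd distros (Arch, Clear Linux, Chainguard, ...) swap in systemctl
        return "systemctl start {0}".format(self.service_name)

    def is_dhcp_enabled(self):
        return True


assert DefaultOSUtil().start_agent_service() == "service waagent start"
assert SystemdOSUtil().start_agent_service() == "systemctl start waagent"
assert SystemdOSUtil().is_dhcp_enabled() is True
```

The factory (`get_osutil`) then only has to pick the right subclass; callers program against the `DefaultOSUtil` interface.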
# # Requires Python 2.6+ and Openssl 1.0+ # import array import fcntl import os import platform import re import socket import struct import time from azurelinuxagent.common.future import array_to_bytes try: # WAAgent > 2.1.3 import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.default import DefaultOSUtil except ImportError: # WAAgent <= 2.1.3 import azurelinuxagent.logger as logger import azurelinuxagent.utils.shellutil as shellutil from azurelinuxagent.exception import OSUtilError from azurelinuxagent.distro.default.osutil import DefaultOSUtil class BigIpOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=W0235 super(BigIpOSUtil, self).__init__() def _wait_until_mcpd_is_initialized(self): """Wait for mcpd to become available All configuration happens in mcpd so we need to wait that this is available before we go provisioning the system. I call this method at the first opportunity I have (during the DVD mounting call). This ensures that the rest of the provisioning does not need to wait for mcpd to be available unless it absolutely wants to. :return bool: Returns True upon success :raises OSUtilError: Raises exception if mcpd does not come up within roughly 50 minutes (100 * 30 seconds) """ for retries in range(1, 100): # pylint: disable=W0612 # Retry until mcpd completes startup: logger.info("Checking to see if mcpd is up") rc = shellutil.run("/usr/bin/tmsh -a show sys mcp-state field-fmt 2>/dev/null | grep phase | grep running", chk_err=False) if rc == 0: logger.info("mcpd is up!") break time.sleep(30) if rc == 0: return True raise OSUtilError( "mcpd hasn't completed initialization! Cannot proceed!" 
) def _save_sys_config(self): cmd = "/usr/bin/tmsh save sys config" rc = shellutil.run(cmd) if rc != 0: logger.error("WARNING: Cannot save sys config on 1st boot.") return rc def restart_ssh_service(self): return shellutil.run("/usr/bin/bigstart restart sshd", chk_err=False) def stop_agent_service(self): return shellutil.run("/sbin/service {0} stop".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("/sbin/service {0} start".format(self.service_name), chk_err=False) def register_agent_service(self): return shellutil.run("/sbin/chkconfig --add {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("/sbin/chkconfig --del {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["/sbin/pidof", "dhclient"]) def set_hostname(self, hostname): """Set the static hostname of the device Normally, tmsh is used to set the hostname for the system. For our purposes at this time though, I would hesitate to trust this function. Azure(Stack) uses the name that you provide in the Web UI or ARM (for example) as the value of the hostname argument to this method. The problem is that there is nowhere in the UI that specifies the restrictions and checks that tmsh has for the hostname. For example, if you set the name "bigip1" in the Web UI, Azure(Stack) considers that a perfectly valid name. When WAAgent gets around to running though, tmsh will reject that value because it is not a fully qualified domain name. The proper value should have been bigip.xxx.yyy WAAgent will not fail if this command fails, but the hostname will not be what the user set either. Currently we do not set the hostname when WAAgent starts up, so I am passing on setting it here too. 
:param hostname: The hostname to set on the device """ return None def set_dhcp_hostname(self, hostname): """Sets the DHCP hostname See `set_hostname` for an explanation of why I pass here :param hostname: The hostname to set on the device """ return None def useradd(self, username, expiration=None, comment=None): """Create user account using tmsh Our policy is to create two accounts when booting a BIG-IP instance. The first account is the one that the user specified when they did the instance creation. The second one is the admin account that is, or should be, built in to the system. :param username: The username that you want to add to the system :param expiration: The expiration date to use. We do not use this value. :param comment: description of the account. We do not use this value. """ if self.get_userentry(username): logger.info("User {0} already exists, skip useradd", username) return None cmd = ['/usr/bin/tmsh', 'create', 'auth', 'user', username, 'partition-access', 'add', '{', 'all-partitions', '{', 'role', 'admin', '}', '}', 'shell', 'bash'] self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) self._save_sys_config() return 0 def chpasswd(self, username, password, crypt_id=6, salt_len=10): """Change a user's password with tmsh Since we are creating the user specified account and additionally changing the password of the built-in 'admin' account, both must be modified in this method. Note that the default method also checks for a "system level" of the user; based on the value of UID_MIN in /etc/login.defs. In our env, all user accounts have the UID 0. So we can't rely on this value. :param username: The username whose password to change :param password: The unencrypted password to set for the user :param crypt_id: If encrypting the password, the crypt_id that was used :param salt_len: If encrypting the password, the length of the salt value used to do it. 
""" # Start by setting the password of the user provided account self._run_command_raising_OSUtilError( ['/usr/bin/tmsh', 'modify', 'auth', 'user', username, 'password', password], err_msg="Failed to set password for {0}".format(username)) # Next, set the password of the built-in 'admin' account to be have # the same password as the user provided account userentry = self.get_userentry('admin') if userentry is None: raise OSUtilError("The 'admin' user account was not found!") self._run_command_raising_OSUtilError( ['/usr/bin/tmsh', 'modify', 'auth', 'user', 'admin', 'password', password], err_msg="Failed to set password for admin") self._save_sys_config() return 0 def del_account(self, username): """Deletes a user account. Note that the default method also checks for a "system level" of the user; based on the value of UID_MIN in /etc/login.defs. In our env, all user accounts have the UID 0. So we can't rely on this value. We also don't use sudo, so we remove that method call as well. :param username: :return: """ self._run_command_without_raising(["touch", "/var/run/utmp"]) self._run_command_without_raising(['/usr/bin/tmsh', 'delete', 'auth', 'user', username]) def get_dvd_device(self, dev_dir='/dev'): """Find BIG-IP's CD/DVD device This device is almost certainly /dev/cdrom so I added the ? to this pattern. Note that this method will return upon the first device found, but in my tests with 12.1.1 it will also find /dev/sr0 on occasion. This is NOT the correct CD/DVD device though. 
:todo: Consider just always returning "/dev/cdrom" here if that device device exists on all platforms that are supported on Azure(Stack) :param dev_dir: The root directory from which to look for devices """ patten = r'(sr[0-9]|hd[c-z]|cdrom[0-9]?)' for dvd in [re.match(patten, dev) for dev in os.listdir(dev_dir)]: if dvd is not None: return "/dev/{0}".format(dvd.group(0)) raise OSUtilError("Failed to get dvd device") # The linter reports that this function's arguments differ from those # of the function this overrides. This doesn't seem to be a problem, however, # because this function accepts any option that could'be been specified for # the original (and, by forwarding the kwargs to the original, will reject any # option _not_ accepted by the original). Additionally, this method allows us # to keep the defaults for mount_dvd in one place (the original function) instead # of having to duplicate it here as well. def mount_dvd(self, **kwargs): # pylint: disable=W0221 """Mount the DVD containing the provisioningiso.iso file This is the _first_ hook that WAAgent provides for us, so this is the point where we should wait for mcpd to load. I am just overloading this method to add the mcpd wait. Then I proceed with the stock code. :param max_retry: Maximum number of retries waagent will make when mounting the provisioningiso.iso DVD :param chk_err: Whether to check for errors or not in the mounting commands """ self._wait_until_mcpd_is_initialized() return super(BigIpOSUtil, self).mount_dvd(**kwargs) def eject_dvd(self, chk_err=True): """Runs the eject command to eject the provisioning DVD BIG-IP does not include an eject command. It is sufficient to just umount the DVD disk. But I will log that we do not support this for future reference. :param chk_err: Whether or not to check for errors raised by the eject command """ logger.warn("Eject is not supported on this platform") def get_first_if(self): """Return the interface name, and ip addr of the management interface. 
We need to add a struct_size check here because, curiously, our 64bit platform is identified by python in Azure(Stack) as 32 bit and without adjusting the struct_size, we can't get the information we need. I believe this may be caused by only python i686 being shipped with BIG-IP instead of python x86_64?? """ iface = '' expected = 16 # how many devices should I expect... python_arc = platform.architecture()[0] if python_arc == '64bit': struct_size = 40 # for 64bit the size is 40 bytes else: struct_size = 32 # for 32bit the size is 32 bytes sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) buff = array.array('B', b'\0' * (expected * struct_size)) param = struct.pack('iL', expected*struct_size, buff.buffer_info()[0]) ret = fcntl.ioctl(sock.fileno(), 0x8912, param) retsize = (struct.unpack('iL', ret)[0]) if retsize == (expected * struct_size): logger.warn(('SIOCGIFCONF returned more than {0} up ' 'network interfaces.'), expected) sock = array_to_bytes(buff) for i in range(0, struct_size * expected, struct_size): iface = self._format_single_interface_name(sock, i) # Azure public was returning "lo:1" when deploying WAF if b'lo' in iface: continue else: break return iface.decode('latin-1'), socket.inet_ntoa(sock[i+20:i+24]) # pylint: disable=undefined-loop-variable def _format_single_interface_name(self, sock, offset): return sock[offset:offset+16].split(b'\0', 1)[0] def route_add(self, net, mask, gateway): """Add specified route using tmsh. :param net: :param mask: :param gateway: :return: """ cmd = ("/usr/bin/tmsh create net route " "{0}/{1} gw {2}").format(net, mask, gateway) return shellutil.run(cmd, chk_err=False) def device_for_ide_port(self, port_id): """Return device name attached to ide port 'n'. Include a wait in here because BIG-IP may not have yet initialized this list of devices. 
:param port_id: :return: """ for retries in range(1, 100): # pylint: disable=W0612 # Retry until devices are ready if os.path.exists("/sys/bus/vmbus/devices/"): break else: time.sleep(10) return super(BigIpOSUtil, self).device_for_ide_port(port_id) Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/chainguard.py000066400000000000000000000076721510742556200261210ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2025 Chainguard Inc # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
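`_wait_until_mcpd_is_initialized` and `device_for_ide_port` above both poll a condition a bounded number of times before giving up; the shape can be sketched generically (the function names and timings here are illustrative, not the agent's API):

```python
import time

def wait_for(check, attempts=5, interval=0):
    """Poll `check` up to `attempts` times, sleeping `interval` seconds between tries."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(interval)
    raise RuntimeError("condition not met after {0} attempts".format(attempts))

state = {"calls": 0}

def mcpd_like_condition():
    # Simulates a service that only becomes ready on the third poll.
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_for(mcpd_like_condition) is True
assert state["calls"] == 3
```

Raising at the end mirrors `_wait_until_mcpd_is_initialized`, which fails loudly with `OSUtilError` rather than proceeding with an uninitialized system.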
#
# Requires Python 2.6+ and Openssl 1.0+
#

import time
import glob
import textwrap

import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.common.osutil.default import DefaultOSUtil


class ChainguardOSUtil(DefaultOSUtil):
    def __init__(self):
        super(ChainguardOSUtil, self).__init__()
        self.agent_conf_file_path = '/etc/waagent.conf'
        self.jit_enabled = True
        self.__name__ = 'Chainguard'
        self.service_name = self.get_service_name()

    @staticmethod
    def get_agent_bin_path():
        return "/usr/bin"

    @staticmethod
    def get_systemd_unit_file_install_path():
        return "/usr/lib/systemd/system"

    def restart_if(self, ifname, retries=3, wait=5):
        """
        Restart systemd-networkd
        """
        retry_limit = retries + 1
        for attempt in range(1, retry_limit):
            try:
                shellutil.run_command(["systemctl", "restart", "systemd-networkd"])
                return  # stop once the restart succeeded
            except shellutil.CommandError as cmd_err:
                # Use the first (and only) format argument; "{1}" would raise an IndexError.
                logger.warn("failed to restart systemd-networkd: return code {0}".format(cmd_err.returncode))
                if attempt < retry_limit:
                    logger.info("retrying in {0} seconds".format(wait))
                    time.sleep(wait)
                else:
                    logger.warn("exceeded restart retries")

    def is_dhcp_available(self):
        return True

    def is_dhcp_enabled(self):
        # shellutil.run returns the command's exit code, not its output;
        # "systemctl is-enabled" exits 0 when the unit is enabled.
        return shellutil.run("systemctl is-enabled systemd-networkd", chk_err=False) == 0

    def get_dhcp_pid(self):
        return self._get_dhcp_pid(["pidof", "systemd-networkd"])

    def start_network(self):
        return shellutil.run("systemctl start systemd-networkd", chk_err=False)

    def stop_network(self):
        return shellutil.run("systemctl stop systemd-networkd", chk_err=False)

    def start_dhcp_service(self):
        return self.start_network()

    def stop_dhcp_service(self):
        return self.stop_network()

    def start_agent_service(self):
        return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False)

    def stop_agent_service(self):
        return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False)

    def get_dhcp_lease_endpoint(self):
        pathglob = "/run/systemd/netif/leases/*"
logger.info("looking for leases in path [{0}]".format(pathglob)) endpoint = None for lease_file in glob.glob(pathglob): try: with open(lease_file) as f: lease = f.read() for line in lease.splitlines(): if line.startswith("OPTION_245"): option_245 = line.split("=")[1] options = [int(i, 16) for i in textwrap.wrap(option_245, 2)] endpoint = "{0}.{1}.{2}.{3}".format(*options) logger.info("found endpoint [{0}]".format(endpoint)) except Exception as e: logger.info( "Failed to parse {0}: {1}".format(lease_file, str(e)) ) if endpoint is not None: logger.info("cached endpoint found [{0}]".format(endpoint)) else: logger.info("cached endpoint not found") return endpoint Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/clearlinux.py000066400000000000000000000075351510742556200261600ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
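The OPTION_245 decoding in `ChainguardOSUtil.get_dhcp_lease_endpoint` above turns the hex-encoded lease value into a dotted-quad wireserver address. A minimal standalone sketch of that decoding (the sample value below is illustrative):

```python
import textwrap


def decode_option_245(option_245):
    # Split the hex string into two-character byte pairs, parse each pair
    # as base-16, and join the four octets into a dotted-quad address.
    options = [int(i, 16) for i in textwrap.wrap(option_245, 2)]
    return "{0}.{1}.{2}.{3}".format(*options)


print(decode_option_245("A83F8110"))  # → 168.63.129.16
```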
# # Requires Python 2.6+ and Openssl 1.0+ # import os # pylint: disable=W0611 import re # pylint: disable=W0611 import pwd # pylint: disable=W0611 import shutil # pylint: disable=W0611 import socket # pylint: disable=W0611 import array # pylint: disable=W0611 import struct # pylint: disable=W0611 import fcntl # pylint: disable=W0611 import time # pylint: disable=W0611 import base64 # pylint: disable=W0611 import errno import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil # pylint: disable=W0611 from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.exception import OSUtilError class ClearLinuxUtil(DefaultOSUtil): def __init__(self): super(ClearLinuxUtil, self).__init__() self.agent_conf_file_path = '/usr/share/defaults/waagent/waagent.conf' self.jit_enabled = True @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" @staticmethod def get_agent_bin_path(): return "/usr/bin" def is_dhcp_enabled(self): return True def start_network(self) : return shellutil.run("systemctl start systemd-networkd", chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run("systemctl restart systemd-networkd") def restart_ssh_service(self): # SSH is socket activated. No need to restart it. 
pass def stop_dhcp_service(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def conf_sshd(self, disable_password): # Don't whack the system default sshd conf pass def del_root_password(self): try: passwd_file_path = conf.get_passwd_file_path() try: passwd_content = fileutil.read_file(passwd_file_path) if not passwd_content: # Empty file is no better than no file raise IOError(errno.ENOENT, "Empty File", passwd_file_path) except (IOError, OSError) as file_read_err: if file_read_err.errno != errno.ENOENT: raise new_passwd = ["root:*LOCK*:14600::::::"] else: passwd = passwd_content.split('\n') new_passwd = [x for x in passwd if not x.startswith("root:")] new_passwd.insert(0, "root:*LOCK*:14600::::::") fileutil.write_file(passwd_file_path, "\n".join(new_passwd)) except IOError as e: raise OSUtilError("Failed to delete root password:{0}".format(e)) pass # pylint: disable=W0107 Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/coreos.py000066400000000000000000000057201510742556200252760ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class CoreOSUtil(DefaultOSUtil): def __init__(self): super(CoreOSUtil, self).__init__() self.agent_conf_file_path = '/usr/share/oem/waagent.conf' self.waagent_path = '/usr/share/oem/bin/waagent' self.python_path = '/usr/share/oem/python/bin' self.jit_enabled = True if 'PATH' in os.environ: path = "{0}:{1}".format(os.environ['PATH'], self.python_path) else: path = self.python_path os.environ['PATH'] = path if 'PYTHONPATH' in os.environ: py_path = os.environ['PYTHONPATH'] py_path = "{0}:{1}".format(py_path, self.waagent_path) else: py_path = self.waagent_path os.environ['PYTHONPATH'] = py_path @staticmethod def get_agent_bin_path(): return "/usr/share/oem/bin" def is_sys_user(self, username): # User 'core' is not a sysuser. if username == 'core': return False return super(CoreOSUtil, self).is_sys_user(username) def is_dhcp_enabled(self): return True def start_network(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run("systemctl restart systemd-networkd") def restart_ssh_service(self): # SSH is socket activated on CoreOS. No need to restart it. 
pass def stop_dhcp_service(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid( ["systemctl", "show", "-p", "MainPID", "systemd-networkd"], transform_command_output=lambda o: o.replace("MainPID=", "")) def conf_sshd(self, disable_password): # In CoreOS, /etc/sshd_config is mount readonly. Skip the setting. pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/debian.py000066400000000000000000000052431510742556200252260ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
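`CoreOSUtil.get_dhcp_pid` above queries systemd for the networkd PID and strips the property name from the command output. That transform can be sketched in isolation (the sample output below is illustrative):

```python
def parse_main_pid(output):
    # 'systemctl show -p MainPID <unit>' prints a line such as
    # "MainPID=1234"; drop the property name to recover the bare PID.
    return output.strip().replace("MainPID=", "")


print(parse_main_pid("MainPID=1234\n"))  # → 1234
```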
# # Requires Python 2.6+ and Openssl 1.0+ # import os # pylint: disable=W0611 import re # pylint: disable=W0611 import pwd # pylint: disable=W0611 import shutil # pylint: disable=W0611 import socket # pylint: disable=W0611 import array # pylint: disable=W0611 import struct # pylint: disable=W0611 import fcntl # pylint: disable=W0611 import time # pylint: disable=W0611 import base64 # pylint: disable=W0611 import azurelinuxagent.common.logger as logger # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil # pylint: disable=W0611 import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil # pylint: disable=W0611 from azurelinuxagent.common.osutil.default import DefaultOSUtil class DebianOSBaseUtil(DefaultOSUtil): def __init__(self): super(DebianOSBaseUtil, self).__init__() self.jit_enabled = True def restart_ssh_service(self): return shellutil.run("systemctl --job-mode=ignore-dependencies try-reload-or-restart ssh", chk_err=False) def stop_agent_service(self): return shellutil.run("service azurelinuxagent stop", chk_err=False) def start_agent_service(self): return shellutil.run("service azurelinuxagent start", chk_err=False) def start_network(self): pass def remove_rules_files(self, rules_files=""): pass def restore_rules_files(self, rules_files=""): pass def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhcp/dhclient.*.leases') class DebianOSModernUtil(DebianOSBaseUtil): def __init__(self): super(DebianOSModernUtil, self).__init__() self.jit_enabled = True self.service_name = self.get_service_name() @staticmethod def get_service_name(): return "walinuxagent" def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) 
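The distro modules above (Chainguard, Clear Linux, CoreOS, Debian) all follow the same shape: subclass `DefaultOSUtil` and override only the behaviors that differ from the default. A minimal sketch of that pattern, with illustrative class and service names rather than the agent's real ones:

```python
class BaseUtil(object):
    # Default behavior shared by all distros.
    service_name = "waagent"

    def start_agent_service(self):
        return "systemctl start {0}".format(self.service_name)


class ModernDistroUtil(BaseUtil):
    # A distro overrides only the pieces that differ from the default.
    service_name = "walinuxagent"


print(ModernDistroUtil().start_agent_service())  # → systemctl start walinuxagent
```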
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/default.py000066400000000000000000001553421510742556200254360ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import array import base64 import datetime import errno import fcntl import glob import json import multiprocessing import os import platform import pwd import random import re import shutil import socket import string import struct import sys import time import warnings from pwd import getpwall from azurelinuxagent.common.exception import OSUtilError # # The 'crypt' package was removed in Python 3.13. # # To work around this, WALinuxAgent 2.12 and 2.13 added a dependency on legacycrypt and imported crypt from there. From 2.14, # we instead get crypt from the crypt-r package. The code below needs to handle the case where the self-update WALinuxAgent # is running on a machine where the pre-installed WALinuxAgent is 2.12/2.13 (and crypt may be coming from legacycrypt). # # We first try importing from crypt, which may have been installed from the crypt or crypt-r packages, then try # importing from legacy crypt, then fallback to a dummy function that raises an exception when invoked. The Provisioning Agent # and JIT requests use the crypt function, so those features would fail if none of the required dependencies are installed. 
#
try:
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", category=DeprecationWarning)
        from crypt import crypt  # pylint: disable=deprecated-module
except ImportError:
    __CRYPT_IMPORTED__ = False
    if sys.version_info[0] == 3 and sys.version_info[1] >= 13 or sys.version_info[0] > 3:
        try:
            from legacycrypt import crypt
            __CRYPT_IMPORTED__ = True
        except ImportError:
            pass
    if not __CRYPT_IMPORTED__:
        def crypt(password, salt):
            raise OSUtilError("This feature requires one of the 'crypt', 'legacycrypt' or 'crypt-r' Python packages to be installed.")

from azurelinuxagent.common import conf
from azurelinuxagent.common import logger
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.utils import textutil
from azurelinuxagent.common.future import ustr, array_to_bytes, UTC
from azurelinuxagent.common.utils.cryptutil import CryptUtil
from azurelinuxagent.common.utils.networkutil import RouteEntry, NetworkInterfaceCard
from azurelinuxagent.common.utils.shellutil import CommandError

__RULES_FILES__ = ["/lib/udev/rules.d/75-persistent-net-generator.rules",
                   "/etc/udev/rules.d/70-persistent-net.rules"]

"""
Define distro specific behavior. The OSUtil class defines default behavior
for all distros. Each concrete distro class can override the default behavior
if needed.
""" ALL_CPUS_REGEX = re.compile('^cpu .*') ALL_MEMS_REGEX = re.compile('^Mem.*') DMIDECODE_CMD = 'dmidecode --string system-uuid' PRODUCT_ID_FILE = '/sys/class/dmi/id/product_uuid' UUID_PATTERN = re.compile( r'^\s*[A-F0-9]{8}(?:\-[A-F0-9]{4}){3}\-[A-F0-9]{12}\s*$', re.IGNORECASE) IOCTL_SIOCGIFCONF = 0x8912 IOCTL_SIOCGIFFLAGS = 0x8913 IOCTL_SIOCGIFHWADDR = 0x8927 IFNAMSIZ = 16 IP_COMMAND_OUTPUT = re.compile(r'^\d+:\s+([\w@]+):\s+(.*)$') STORAGE_DEVICE_PATH = '/sys/bus/vmbus/devices/' GEN2_DEVICE_ID = 'f8b3781a-1e82-4818-a1c3-63d806ec15bb' class DefaultOSUtil(object): def __init__(self): self.agent_conf_file_path = '/etc/waagent.conf' self.selinux = None self.disable_route_warning = False self.jit_enabled = False self.service_name = self.get_service_name() @staticmethod def get_service_name(): return "waagent" @staticmethod def get_systemd_unit_file_install_path(): return "/lib/systemd/system" @classmethod def get_network_setup_service_install_path(cls): return cls.get_systemd_unit_file_install_path() @staticmethod def get_agent_bin_path(): return "/usr/sbin" @staticmethod def get_vm_arch(): try: return platform.machine() except Exception as e: logger.warn("Unable to determine cpu architecture: {0}", ustr(e)) return "unknown" @staticmethod def _correct_instance_id(instance_id): """ Azure stores the instance ID with an incorrect byte ordering for the first parts. For example, the ID returned by the metadata service: D0DF4C54-4ECB-4A4B-9954-5BDF3ED5C3B8 will be found as: 544CDFD0-CB4E-4B4A-9954-5BDF3ED5C3B8 This code corrects the byte order such that it is consistent with that returned by the metadata service. 
        """
        if not UUID_PATTERN.match(instance_id):
            return instance_id
        parts = instance_id.split('-')
        return '-'.join([
                textutil.swap_hexstring(parts[0], width=2),
                textutil.swap_hexstring(parts[1], width=2),
                textutil.swap_hexstring(parts[2], width=2),
                parts[3],
                parts[4]
            ])

    def is_current_instance_id(self, id_that):
        """
        Compare two instance IDs for equality, but allow that some IDs
        may have been persisted using the incorrect byte ordering.
        """
        id_this = self.get_instance_id()
        logger.verbose("current instance id: {0}".format(id_this))
        logger.verbose(" former instance id: {0}".format(id_that))
        return id_this.lower() == id_that.lower() or \
            id_this.lower() == self._correct_instance_id(id_that).lower()

    def get_agent_conf_file_path(self):
        return self.agent_conf_file_path

    def get_instance_id(self):
        """
        Azure records a UUID as the instance ID. First check
        /sys/class/dmi/id/product_uuid; if that is missing, extract it
        from dmidecode. If neither works (for old VMs), return the
        empty string.
        """
        if os.path.isfile(PRODUCT_ID_FILE):
            s = fileutil.read_file(PRODUCT_ID_FILE).strip()
        else:
            rc, s = shellutil.run_get_output(DMIDECODE_CMD)
            if rc != 0 or UUID_PATTERN.match(s) is None:
                return ""

        return self._correct_instance_id(s.strip())

    @staticmethod
    def get_userentry(username):
        try:
            return pwd.getpwnam(username)
        except KeyError:
            return None

    def get_root_username(self):
        return "root"

    def is_sys_user(self, username):
        """
        Check whether the user is a system user.
If reset sys user is allowed in conf, return False Otherwise, check whether UID is less than UID_MIN """ if conf.get_allow_reset_sys_user(): return False userentry = self.get_userentry(username) uidmin = None try: uidmin_def = fileutil.get_line_startingwith("UID_MIN", "/etc/login.defs") if uidmin_def is not None: uidmin = int(uidmin_def.split()[1]) except IOError as e: # pylint: disable=W0612 pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin: return True else: return False def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ userentry = self.get_userentry(username) if userentry is not None: logger.info("User {0} already exists, skip useradd", username) return if expiration is not None: cmd = ["useradd", "-m", username, "-e", expiration] else: cmd = ["useradd", "-m", username] if comment is not None: cmd.extend(["-c", comment]) self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if self.is_sys_user(username): raise OSUtilError(("User {0} is a system user, " "will not set password.").format(username)) passwd_hash = DefaultOSUtil.gen_password_hash(password, crypt_id, salt_len) self._run_command_raising_OSUtilError(["usermod", "-p", passwd_hash, username], err_msg="Failed to set password for {0}".format(username)) @staticmethod def gen_password_hash(password, crypt_id, salt_len): collection = string.ascii_letters + string.digits salt = ''.join(random.choice(collection) for _ in range(salt_len)) salt = "${0}${1}".format(crypt_id, salt) if sys.version_info[0] == 2: # if python 2.*, encode to type 'str' to prevent Unicode Encode Error from crypt.crypt password = password.encode('utf-8') return crypt(password, salt) def get_users(self): return getpwall() def conf_sudoer(self, username, nopasswd=False, remove=False): sudoers_dir = conf.get_sudoers_dir() sudoers_wagent = 
os.path.join(sudoers_dir, 'waagent') if not remove: # for older distros create sudoers.d if not os.path.isdir(sudoers_dir): # create the sudoers.d directory fileutil.mkdir(sudoers_dir) # add the include of sudoers.d to the /etc/sudoers sudoers_file = os.path.join(sudoers_dir, os.pardir, 'sudoers') include_sudoers_dir = "\n#includedir {0}\n".format(sudoers_dir) fileutil.append_file(sudoers_file, include_sudoers_dir) sudoer = None if nopasswd: sudoer = "{0} ALL=(ALL) NOPASSWD: ALL".format(username) else: sudoer = "{0} ALL=(ALL) ALL".format(username) if not os.path.isfile(sudoers_wagent) or \ fileutil.findstr_in_file(sudoers_wagent, sudoer) is False: fileutil.append_file(sudoers_wagent, "{0}\n".format(sudoer)) fileutil.chmod(sudoers_wagent, 0o440) else: # remove user from sudoers if os.path.isfile(sudoers_wagent): try: content = fileutil.read_file(sudoers_wagent) sudoers = content.split("\n") sudoers = [x for x in sudoers if username not in x] fileutil.write_file(sudoers_wagent, "\n".join(sudoers)) except IOError as e: raise OSUtilError("Failed to remove sudoer: {0}".format(e)) def del_root_password(self): try: passwd_file_path = conf.get_passwd_file_path() passwd_content = fileutil.read_file(passwd_file_path) passwd = passwd_content.split('\n') new_passwd = [x for x in passwd if not x.startswith("root:")] new_passwd.insert(0, "root:*LOCK*:14600::::::") fileutil.write_file(passwd_file_path, "\n".join(new_passwd)) except IOError as e: raise OSUtilError("Failed to delete root password:{0}".format(e)) @staticmethod def _norm_path(filepath): home = conf.get_home_dir() # Expand HOME variable if present in path path = os.path.normpath(filepath.replace("$HOME", home)) return path def deploy_ssh_keypair(self, username, keypair): """ Deploy id_rsa and id_rsa.pub """ path, thumbprint = keypair path = self._norm_path(path) dir_path = os.path.dirname(path) fileutil.mkdir(dir_path, mode=0o700, owner=username) lib_dir = conf.get_lib_dir() prv_path = os.path.join(lib_dir, thumbprint 
+ '.prv') if not os.path.isfile(prv_path): raise OSUtilError("Can't find {0}.prv".format(thumbprint)) shutil.copyfile(prv_path, path) pub_path = path + '.pub' crytputil = CryptUtil(conf.get_openssl_cmd()) pub = crytputil.get_pubkey_from_prv(prv_path) fileutil.write_file(pub_path, pub) self.set_selinux_context(pub_path, 'unconfined_u:object_r:ssh_home_t:s0') self.set_selinux_context(path, 'unconfined_u:object_r:ssh_home_t:s0') os.chmod(path, 0o644) os.chmod(pub_path, 0o600) def openssl_to_openssh(self, input_file, output_file): cryptutil = CryptUtil(conf.get_openssl_cmd()) cryptutil.crt_to_ssh(input_file, output_file) def deploy_ssh_pubkey(self, username, pubkey): """ Deploy authorized_key """ path, thumbprint, value = pubkey if path is None: raise OSUtilError("Public key path is None") crytputil = CryptUtil(conf.get_openssl_cmd()) path = self._norm_path(path) dir_path = os.path.dirname(path) fileutil.mkdir(dir_path, mode=0o700, owner=username) if value is not None: if not value.startswith("ssh-"): raise OSUtilError("Bad public key: {0}".format(value)) if not value.endswith("\n"): value += "\n" fileutil.write_file(path, value, append=True) elif thumbprint is not None: lib_dir = conf.get_lib_dir() crt_path = os.path.join(lib_dir, thumbprint + '.crt') if not os.path.isfile(crt_path): raise OSUtilError("Can't find {0}.crt".format(thumbprint)) pub_path = os.path.join(lib_dir, thumbprint + '.pub') pub = crytputil.get_pubkey_from_crt(crt_path) fileutil.write_file(pub_path, pub) self.set_selinux_context(pub_path, 'unconfined_u:object_r:ssh_home_t:s0') self.openssl_to_openssh(pub_path, path) fileutil.chmod(pub_path, 0o600) else: raise OSUtilError("SSH public key Fingerprint and Value are None") self.set_selinux_context(path, 'unconfined_u:object_r:ssh_home_t:s0') fileutil.chowner(path, username) fileutil.chmod(path, 0o644) def is_selinux_system(self): """ Checks and sets self.selinux = True if SELinux is available on system. 
""" if self.selinux == None: if shellutil.run("which getenforce", chk_err=False) == 0: self.selinux = True else: self.selinux = False return self.selinux def is_selinux_enforcing(self): """ Calls shell command 'getenforce' and returns True if 'Enforcing'. """ if self.is_selinux_system(): output = shellutil.run_get_output("getenforce")[1] return output.startswith("Enforcing") else: return False def set_selinux_context(self, path, con): # pylint: disable=R1710 """ Calls shell 'chcon' with 'path' and 'con' context. Returns exit result. """ if self.is_selinux_system(): if not os.path.exists(path): logger.error("Path does not exist: {0}".format(path)) return 1 try: shellutil.run_command(['chcon', con, path], log_error=True) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def conf_sshd(self, disable_password): option = "no" if disable_password else "yes" conf_file_path = conf.get_sshd_conf_file_path() conf_file = fileutil.read_file(conf_file_path).split("\n") textutil.set_ssh_config(conf_file, "PasswordAuthentication", option) textutil.set_ssh_config(conf_file, "ChallengeResponseAuthentication", option) textutil.set_ssh_config(conf_file, "ClientAliveInterval", str(conf.get_ssh_client_alive_interval())) fileutil.write_file(conf_file_path, "\n".join(conf_file)) logger.info("{0} SSH password-based authentication methods." 
.format("Disabled" if disable_password else "Enabled")) logger.info("Configured SSH client probing to keep connections alive.") def get_dvd_device(self, dev_dir='/dev'): pattern = r'(sr[0-9]|hd[c-z]|cdrom[0-9]|cd[0-9]|vd[b-z])' device_list = os.listdir(dev_dir) for dvd in [re.match(pattern, dev) for dev in device_list]: if dvd is not None: return "/dev/{0}".format(dvd.group(0)) inner_detail = "The following devices were found, but none matched " \ "the pattern [{0}]: {1}\n".format(pattern, device_list) raise OSUtilError(msg="Failed to get dvd device from {0}".format(dev_dir), inner=inner_detail) def mount_dvd(self, max_retry=6, chk_err=True, dvd_device=None, mount_point=None, sleep_time=5): if dvd_device is None: dvd_device = self.get_dvd_device() if mount_point is None: mount_point = conf.get_dvd_mount_point() mount_list = shellutil.run_get_output("mount")[1] existing = self.get_mount_point(mount_list, dvd_device) if existing is not None: # already mounted logger.info("{0} is already mounted at {1}", dvd_device, existing) return if not os.path.isdir(mount_point): os.makedirs(mount_point) err = '' for retry in range(1, max_retry): return_code, err = self.mount(dvd_device, mount_point, option=["-o", "ro", "-t", "udf,iso9660,vfat"], chk_err=False) if return_code == 0: logger.info("Successfully mounted dvd") return else: logger.warn( "Mounting dvd failed [retry {0}/{1}, sleeping {2} sec]", retry, max_retry - 1, sleep_time) if retry < max_retry: time.sleep(sleep_time) if chk_err: raise OSUtilError("Failed to mount dvd device", inner=err) def umount_dvd(self, chk_err=True, mount_point=None): if mount_point is None: mount_point = conf.get_dvd_mount_point() return_code = self.umount(mount_point, chk_err=chk_err) if chk_err and return_code != 0: raise OSUtilError("Failed to unmount dvd device at {0}".format(mount_point)) def eject_dvd(self, chk_err=True): dvd = self.get_dvd_device() dev = dvd.rsplit('/', 1)[1] pattern = r'(vd[b-z])' # We should not eject if the disk is not 
a cdrom if re.search(pattern, dev): return try: shellutil.run_command(["eject", dvd]) except shellutil.CommandError as cmd_err: if chk_err: msg = "Failed to eject dvd: ret={0}\n[stdout]\n{1}\n\n[stderr]\n{2}"\ .format(cmd_err.returncode, cmd_err.stdout, cmd_err.stderr) raise OSUtilError(msg) def try_load_atapiix_mod(self): try: self.load_atapiix_mod() except Exception as e: logger.warn("Could not load ATAPI driver: {0}".format(e)) def load_atapiix_mod(self): if self.is_atapiix_mod_loaded(): return ret, kern_version = shellutil.run_get_output("uname -r") if ret != 0: raise Exception("Failed to call uname -r") mod_path = os.path.join('/lib/modules', kern_version.strip('\n'), 'kernel/drivers/ata/ata_piix.ko') if not os.path.isfile(mod_path): raise Exception("Can't find module file:{0}".format(mod_path)) ret, output = shellutil.run_get_output("insmod " + mod_path) # pylint: disable=W0612 if ret != 0: raise Exception("Error calling insmod for ATAPI CD-ROM driver") if not self.is_atapiix_mod_loaded(max_retry=3): raise Exception("Failed to load ATAPI CD-ROM driver") def is_atapiix_mod_loaded(self, max_retry=1): for retry in range(0, max_retry): ret = shellutil.run("lsmod | grep ata_piix", chk_err=False) if ret == 0: logger.info("Module driver for ATAPI CD-ROM is already present.") return True if retry < max_retry - 1: time.sleep(1) return False def mount(self, device, mount_point, option=None, chk_err=True): if not option: option = [] cmd = ["mount"] cmd.extend(option + [device, mount_point]) try: output = shellutil.run_command(cmd, log_error=chk_err) except shellutil.CommandError as cmd_err: detail = "[{0}] returned {1}:\n stdout: {2}\n\nstderr: {3}".format(cmd, cmd_err.returncode, cmd_err.stdout, cmd_err.stderr) return cmd_err.returncode, detail return 0, output def umount(self, mount_point, chk_err=True): try: shellutil.run_command(["umount", mount_point], log_error=chk_err) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def 
allow_dhcp_broadcast(self):
        # Open DHCP port if iptables is enabled.
        # We suppress error logging on error.
        shellutil.run("iptables -D INPUT -p udp --dport 68 -j ACCEPT", chk_err=False)
        shellutil.run("iptables -I INPUT -p udp --dport 68 -j ACCEPT", chk_err=False)

    def remove_rules_files(self, rules_files=None):
        if rules_files is None:
            rules_files = __RULES_FILES__
        lib_dir = conf.get_lib_dir()
        for src in rules_files:
            file_name = fileutil.base_name(src)
            dest = os.path.join(lib_dir, file_name)
            if os.path.isfile(dest):
                os.remove(dest)
            if os.path.isfile(src):
                logger.warn("Move rules file {0} to {1}", file_name, dest)
                shutil.move(src, dest)

    def restore_rules_files(self, rules_files=None):
        if rules_files is None:
            rules_files = __RULES_FILES__
        lib_dir = conf.get_lib_dir()
        for dest in rules_files:
            filename = fileutil.base_name(dest)
            src = os.path.join(lib_dir, filename)
            if os.path.isfile(dest):
                continue
            if os.path.isfile(src):
                logger.warn("Move rules file {0} to {1}", filename, dest)
                shutil.move(src, dest)

    def get_mac_addr(self):
        """
        Convenience function, returns mac addr bound to
        first non-loopback interface.
        """
        ifname = self.get_if_name()
        addr = self.get_if_mac(ifname)
        return textutil.hexstr_to_bytearray(addr)

    def get_if_mac(self, ifname):
        """
        Return the mac-address bound to the socket.
        """
        sock = socket.socket(socket.AF_INET,
                             socket.SOCK_DGRAM,
                             socket.IPPROTO_UDP)
        param = struct.pack('256s', (ifname[:15] + ('\0' * 241)).encode('latin-1'))
        info = fcntl.ioctl(sock.fileno(), IOCTL_SIOCGIFHWADDR, param)
        sock.close()
        return ''.join(['%02X' % textutil.str_to_ord(char) for char in info[18:24]])

    @staticmethod
    def _get_struct_ifconf_size():
        """
        Return the sizeof struct ifreq. On 64-bit platforms the size is
        40 bytes; on 32-bit platforms the size is 32 bytes.
        """
        python_arc = platform.architecture()[0]
        struct_size = 32 if python_arc == '32bit' else 40
        return struct_size

    def _get_all_interfaces(self):
        """
        Return a dictionary mapping from interface name to IPv4 address.
        Interfaces without a name are ignored.
        """
        expected = 16  # how many devices should I expect...
        struct_size = DefaultOSUtil._get_struct_ifconf_size()
        array_size = expected * struct_size

        buff = array.array('B', b'\0' * array_size)
        param = struct.pack('iL', array_size, buff.buffer_info()[0])
        sock = socket.socket(socket.AF_INET,
                             socket.SOCK_DGRAM,
                             socket.IPPROTO_UDP)
        ret = fcntl.ioctl(sock.fileno(), IOCTL_SIOCGIFCONF, param)
        retsize = (struct.unpack('iL', ret)[0])
        sock.close()

        if retsize == array_size:
            logger.warn(('SIOCGIFCONF returned more than {0} up '
                         'network interfaces.'), expected)

        ifconf_buff = array_to_bytes(buff)

        ifaces = {}
        for i in range(0, array_size, struct_size):
            iface = ifconf_buff[i:i + IFNAMSIZ].split(b'\0', 1)[0]
            if len(iface) > 0:
                iface_name = iface.decode('latin-1')
                if iface_name not in ifaces:
                    ifaces[iface_name] = socket.inet_ntoa(ifconf_buff[i + 20:i + 24])
        return ifaces

    def get_first_if(self):
        """
        Return the interface name, and IPv4 addr of the "primary" interface or,
        failing that, any active non-loopback interface.
        """
        primary = self.get_primary_interface()
        ifaces = self._get_all_interfaces()

        if primary in ifaces:
            return primary, ifaces[primary]

        for iface_name in ifaces.keys():
            if not self.is_loopback(iface_name):
                logger.info("Choosing non-primary [{0}]".format(iface_name))
                return iface_name, ifaces[iface_name]

        return '', ''

    @staticmethod
    def _build_route_list(proc_net_route):
        """
        Construct a list of network route entries
        :param list(str) proc_net_route: Route table lines, including headers, containing at least one route
        :return: List of network route objects
        :rtype: list(RouteEntry)
        """
        idx = 0
        column_index = {}
        header_line = proc_net_route[0]
        for header in filter(lambda h: len(h) > 0, header_line.split("\t")):
            column_index[header.strip()] = idx
            idx += 1
        try:
            idx_iface = column_index["Iface"]
            idx_dest = column_index["Destination"]
            idx_gw = column_index["Gateway"]
            idx_flags = column_index["Flags"]
            idx_metric = column_index["Metric"]
            idx_mask = column_index["Mask"]
        except KeyError:
            msg = "/proc/net/route is missing key information; headers are [{0}]".format(header_line)
            logger.error(msg)
            return []

        route_list = []
        for entry in proc_net_route[1:]:
            route = entry.split("\t")
            if len(route) > 0:
                route_obj = RouteEntry(route[idx_iface], route[idx_dest], route[idx_gw], route[idx_mask],
                                       route[idx_flags], route[idx_metric])
                route_list.append(route_obj)
        return route_list

    @staticmethod
    def read_route_table():
        """
        Return a list of strings comprising the route table, including column
        headers. Each line is stripped of leading or trailing whitespace but
        is otherwise unmolested.

        :return: Entries in the text route table
        :rtype: list(str)
        """
        try:
            with open('/proc/net/route') as routing_table:
                return list(map(str.strip, routing_table.readlines()))
        except Exception as e:
            logger.error("Cannot read route table [{0}]", ustr(e))
        return []

    @staticmethod
    def get_list_of_routes(route_table):
        """
        Construct a list of all network routes known to this system.

        :param list(str) route_table: List of text entries from route table, including headers
        :return: a list of network routes
        :rtype: list(RouteEntry)
        """
        route_list = []
        count = len(route_table)

        if count < 1:
            logger.error("/proc/net/route is missing headers")
        elif count == 1:
            logger.error("/proc/net/route contains no routes")
        else:
            route_list = DefaultOSUtil._build_route_list(route_table)
        return route_list

    def get_primary_interface(self):
        """
        Get the name of the primary interface, which is the one with the
        default route attached to it; if there are multiple default routes,
        the primary has the lowest Metric.
        :return: the interface which has the default route
        """
        # from linux/route.h
        RTF_GATEWAY = 0x02
        DEFAULT_DEST = "00000000"

        primary_interface = None

        if not self.disable_route_warning:
            logger.info("Examine /proc/net/route for primary interface")

        route_table = DefaultOSUtil.read_route_table()

        def is_default(route):
            return route.destination == DEFAULT_DEST and int(route.flags) & RTF_GATEWAY == RTF_GATEWAY

        candidates = list(filter(is_default, DefaultOSUtil.get_list_of_routes(route_table)))

        if len(candidates) > 0:
            def get_metric(route):
                return int(route.metric)
            primary_route = min(candidates, key=get_metric)
            primary_interface = primary_route.interface

        if primary_interface is None:
            primary_interface = ''
            if not self.disable_route_warning:
                with open('/proc/net/route') as routing_table_fh:
                    routing_table_text = routing_table_fh.read()
                    logger.warn('Could not determine primary interface, '
                                'please ensure /proc/net/route is correct')
                    logger.warn('Contents of /proc/net/route:\n{0}'.format(routing_table_text))
                    logger.warn('Primary interface examination will retry silently')
                    self.disable_route_warning = True
        else:
            logger.info('Primary interface is [{0}]'.format(primary_interface))
            self.disable_route_warning = False
        return primary_interface

    def is_primary_interface(self, ifname):
        """
        Indicate whether the specified interface is the primary.

        :param ifname: the name of the interface - eth0, lo, etc.
        :return: True if this interface binds the default route
        """
        return self.get_primary_interface() == ifname

    def is_loopback(self, ifname):
        """
        Determine if a named interface is loopback.
        """
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        ifname_buff = ifname + ('\0' * 256)
        result = fcntl.ioctl(s.fileno(), IOCTL_SIOCGIFFLAGS, ifname_buff)
        flags, = struct.unpack('H', result[16:18])
        isloopback = flags & 8 == 8
        if not self.disable_route_warning:
            logger.info('interface [{0}] has flags [{1}], '
                        'is loopback [{2}]'.format(ifname, flags, isloopback))
        s.close()
        return isloopback

    def get_dhcp_lease_endpoint(self):
        """
        OS specific, this should return the decoded endpoint of
        the wireserver from option 245 in the dhcp leases file
        if it exists on disk.
        :return: The endpoint if available, or None
        """
        return None

    @staticmethod
    def get_endpoint_from_leases_path(pathglob):
        """
        Try to discover and decode the wireserver endpoint in the
        specified dhcp leases path.

        :param pathglob: The path containing dhcp lease files
        :return: The endpoint if available, otherwise None
        """
        endpoint = None

        HEADER_LEASE = "lease"
        HEADER_OPTION_245 = "option unknown-245"
        HEADER_EXPIRE = "expire"
        FOOTER_LEASE = "}"
        FORMAT_DATETIME = "%Y/%m/%d %H:%M:%S"
        option_245_re = re.compile(
            r'\s*option\s+unknown-245\s+([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+);')

        logger.info("looking for leases in path [{0}]".format(pathglob))
        for lease_file in glob.glob(pathglob):
            leases = open(lease_file).read()
            if HEADER_OPTION_245 in leases:
                cached_endpoint = None
                option_245_match = None
                expired = True  # assume expired
                for line in leases.splitlines():
                    if line.startswith(HEADER_LEASE):
                        cached_endpoint = None
                        expired = True
                    elif HEADER_EXPIRE in line:
                        if "never" in line:
                            expired = False
                        else:
                            try:
                                expire_string = line.split(" ", 4)[-1].strip(";")
                                expire_date = datetime.datetime.strptime(expire_string, FORMAT_DATETIME).replace(tzinfo=UTC)
                                if expire_date > datetime.datetime.now(UTC):
                                    expired = False
                            except:  # pylint: disable=W0702
                                logger.error("could not parse expiry token '{0}'".format(line))
                    elif FOOTER_LEASE in line:
                        logger.info("dhcp entry:{0}, 245:{1}, expired:{2}".format(
                            cached_endpoint, option_245_match is not None, expired))
                        if not expired and cached_endpoint is not None:
                            endpoint = cached_endpoint
                            logger.info("found endpoint [{0}]".format(endpoint))
                            # we want to return the last valid entry, so
                            # keep searching
                    else:
                        option_245_match = option_245_re.match(line)
                        if option_245_match is not None:
                            cached_endpoint = '{0}.{1}.{2}.{3}'.format(
                                int(option_245_match.group(1), 16),
                                int(option_245_match.group(2), 16),
                                int(option_245_match.group(3), 16),
                                int(option_245_match.group(4), 16))
        if endpoint is not None:
            logger.info("cached endpoint found [{0}]".format(endpoint))
        else:
            logger.info("cached endpoint not found")
        return endpoint

    def is_missing_default_route(self):
        try:
            route_cmd = ["ip", "route", "show"]
            routes = shellutil.run_command(route_cmd)
            for route in routes.split("\n"):
                if route.startswith("0.0.0.0 ") or route.startswith("default "):
                    return False
            return True
        except CommandError as e:
            logger.warn("Cannot get the routing table. {0} failed: {1}", ustr(route_cmd), ustr(e))
            return False

    def get_if_name(self):
        if_name = ''
        if_found = False
        while not if_found:
            if_name = self.get_first_if()[0]
            if_found = len(if_name) >= 2
            if not if_found:
                time.sleep(2)
        return if_name

    def get_ip4_addr(self):
        return self.get_first_if()[1]

    def set_route_for_dhcp_broadcast(self, ifname):
        try:
            route_cmd = ["ip", "route", "add", "255.255.255.255", "dev", ifname]
            return shellutil.run_command(route_cmd)
        except CommandError:
            return ""

    def remove_route_for_dhcp_broadcast(self, ifname):
        try:
            route_cmd = ["ip", "route", "del", "255.255.255.255", "dev", ifname]
            shellutil.run_command(route_cmd)
        except CommandError:
            pass

    def is_dhcp_available(self):
        return True

    def is_dhcp_enabled(self):
        return False

    def stop_dhcp_service(self):
        pass

    def start_dhcp_service(self):
        pass

    def start_network(self):
        pass

    def start_agent_service(self):
        pass

    def stop_agent_service(self):
        pass

    def register_agent_service(self):
        pass

    def unregister_agent_service(self):
        pass

    def restart_ssh_service(self):
        pass

    def route_add(self, net, mask, gateway):  # pylint: disable=W0613
        """
        Add specified route
        """
        try:
            cmd = ["ip", "route", "add", str(net), "via", gateway]
            return shellutil.run_command(cmd)
        except CommandError:
            return ""

    @staticmethod
    def _text_to_pid_list(text):
        return [int(n) for n in text.split()]

    @staticmethod
    def _get_dhcp_pid(command, transform_command_output=None):
        try:
            output = shellutil.run_command(command)
            if transform_command_output is not None:
                output = transform_command_output(output)
            return DefaultOSUtil._text_to_pid_list(output)
        except CommandError as exception:  # pylint: disable=W0612
            return []

    def get_dhcp_pid(self):
        return self._get_dhcp_pid(["pidof", "dhclient"])

    def set_hostname(self, hostname):
        fileutil.write_file('/etc/hostname', hostname)
        self._run_command_without_raising(["hostname", hostname], log_error=False)

    def set_dhcp_hostname(self, hostname):
        autosend = r'^[^#]*?send\s*host-name.*?(<hostname>|gethostname[(,)])'
        dhclient_files = ['/etc/dhcp/dhclient.conf', '/etc/dhcp3/dhclient.conf', '/etc/dhclient.conf']
        for conf_file in dhclient_files:
            if not os.path.isfile(conf_file):
                continue
            if fileutil.findre_in_file(conf_file, autosend):
                # Return if auto send host-name is configured
                return
            fileutil.update_conf_file(conf_file,
                                      'send host-name',
                                      'send host-name "{0}";'.format(hostname))

    def restart_if(self, ifname, retries=3, wait=5):
        retry_limit = retries + 1
        for attempt in range(1, retry_limit):
            return_code = shellutil.run("ifdown {0} && ifup {0}".format(ifname), expected_errors=[1] if attempt < retries else [])
            if return_code == 0:
                return
            logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code))
            if attempt < retry_limit:
                logger.info("retrying in {0} seconds".format(wait))
                time.sleep(wait)
            else:
                logger.warn("exceeded restart retries")

    def check_and_recover_nic_state(self, ifname):
        # TODO: This should be implemented for all distros where we reset the network during publishing hostname. Currently it is only implemented in RedhatOSUtil.
        pass

    def publish_hostname(self, hostname, recover_nic=False):
        """
        Publishes the provided hostname.
        """
        self.set_dhcp_hostname(hostname)
        self.set_hostname_record(hostname)
        ifname = self.get_if_name()
        self.restart_if(ifname)
        if recover_nic:
            self.check_and_recover_nic_state(ifname)

    def set_scsi_disks_timeout(self, timeout):
        for dev in os.listdir("/sys/block"):
            if dev.startswith('sd'):
                self.set_block_device_timeout(dev, timeout)

    def set_block_device_timeout(self, dev, timeout):
        if dev is not None and timeout is not None:
            file_path = "/sys/block/{0}/device/timeout".format(dev)
            content = fileutil.read_file(file_path)
            original = content.splitlines()[0].rstrip()
            if original != timeout:
                fileutil.write_file(file_path, timeout)
                logger.info("Set block dev timeout: {0} with timeout: {1}",
                            dev, timeout)

    def get_mount_point(self, mountlist, device):
        """
        Example of mountlist:
            /dev/sda1 on / type ext4 (rw)
            proc on /proc type proc (rw)
            sysfs on /sys type sysfs (rw)
            devpts on /dev/pts type devpts (rw,gid=5,mode=620)
            tmpfs on /dev/shm type tmpfs
            (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
            none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
            /dev/sdb1 on /mnt/resource type ext4 (rw)
        """
        if (mountlist and device):
            for entry in mountlist.split('\n'):
                if (re.search(device, entry)):
                    tokens = entry.split()
                    # Return the 3rd column of this line
                    return tokens[2] if len(tokens) > 2 else None
        return None

    @staticmethod
    def _enumerate_device_id():
        """
        Enumerate all storage device IDs.

        Args:
            None

        Returns:
            Iterator[Tuple[str, str]]: VmBus and storage devices.
        """
        if os.path.exists(STORAGE_DEVICE_PATH):
            for vmbus in os.listdir(STORAGE_DEVICE_PATH):
                deviceid = fileutil.read_file(os.path.join(STORAGE_DEVICE_PATH, vmbus, "device_id"))
                guid = deviceid.strip('{}\n')
                yield vmbus, guid

    @staticmethod
    def search_for_resource_disk(gen1_device_prefix, gen2_device_id):
        """
        Search the filesystem for a device by ID or prefix.

        Args:
            gen1_device_prefix (str): Gen1 resource disk prefix.
            gen2_device_id (str): Gen2 resource device ID.

        Returns:
            str: The found device.
        """
        device = None
        # We have to try device IDs for both Gen1 and Gen2 VMs.
        logger.info('Searching gen1 prefix {0} or gen2 {1}'.format(gen1_device_prefix, gen2_device_id))
        try:
            for vmbus, guid in DefaultOSUtil._enumerate_device_id():
                if guid.startswith(gen1_device_prefix) or guid == gen2_device_id:
                    for root, dirs, files in os.walk(STORAGE_DEVICE_PATH + vmbus):  # pylint: disable=W0612
                        root_path_parts = root.split('/')
                        # For Gen1 VMs we only have to check for the block dir in the
                        # current device. But for Gen2 VMs all of the disks (sda, sdb,
                        # sr0) are presented in this device on the same SCSI controller.
                        # Because of that we need to also read the LUN. It will be:
                        #   0 - OS disk
                        #   1 - Resource disk
                        #   2 - CDROM
                        if root_path_parts[-1] == 'block' and (
                                guid != gen2_device_id or
                                root_path_parts[-2].split(':')[-1] == '1'):
                            device = dirs[0]
                            return device
                        else:
                            # older distros
                            for d in dirs:
                                if ':' in d and "block" == d.split(':')[0]:
                                    device = d.split(':')[1]
                                    return device
        except (OSError, IOError) as exc:
            logger.warn('Error getting device for {0} or {1}: {2}', gen1_device_prefix, gen2_device_id, ustr(exc))
        return None

    def device_for_ide_port(self, port_id):
        """
        Return device name attached to ide port 'n'.
        """
        if port_id > 3:
            return None
        g0 = "00000000"
        if port_id > 1:
            g0 = "00000001"
            port_id = port_id - 2

        gen1_device_prefix = '{0}-000{1}'.format(g0, port_id)
        device = DefaultOSUtil.search_for_resource_disk(
            gen1_device_prefix=gen1_device_prefix,
            gen2_device_id=GEN2_DEVICE_ID
        )

        logger.info('Found device: {0}'.format(device))
        return device

    def set_hostname_record(self, hostname):
        fileutil.write_file(conf.get_published_hostname(), contents=hostname)

    def get_hostname_record(self):
        hostname_record = conf.get_published_hostname()
        if not os.path.exists(hostname_record):
            # older agents (but newer or equal to 2.2.3) create published_hostname during provisioning; when provisioning is done
            # by cloud-init the hostname is written to set-hostname
            hostname = self._get_cloud_init_hostname()
            if hostname is None:
                logger.info("Retrieving hostname using socket.gethostname()")
                hostname = socket.gethostname()
            logger.info('Published hostname record does not exist, creating [{0}] with hostname [{1}]', hostname_record, hostname)
            self.set_hostname_record(hostname)

        record = fileutil.read_file(hostname_record)
        return record

    @staticmethod
    def _get_cloud_init_hostname():
        """
        Retrieves the hostname set by cloud-init; returns None if cloud-init did not set the hostname or if there is
        an error retrieving it.
        """
        hostname_file = '/var/lib/cloud/data/set-hostname'
        try:
            if os.path.exists(hostname_file):
                #
                # The format is similar to
                #
                #     $ cat /var/lib/cloud/data/set-hostname
                #     {
                #      "fqdn": "nam-u18",
                #      "hostname": "nam-u18"
                #     }
                #
                logger.info("Retrieving hostname from {0}", hostname_file)
                with open(hostname_file, 'r') as file_:
                    hostname_info = json.load(file_)
                if "hostname" in hostname_info:
                    return hostname_info["hostname"]
        except Exception as exception:
            logger.warn("Error retrieving hostname: {0}", ustr(exception))
        return None

    def del_account(self, username):
        if self.is_sys_user(username):
            logger.error("{0} is a system user. Will not delete it.", username)
        self._run_command_without_raising(["touch", "/var/run/utmp"])
        self._run_command_without_raising(['userdel', '-f', '-r', username])
        self.conf_sudoer(username, remove=True)

    def decode_customdata(self, data):
        return base64.b64decode(data).decode('utf-8')

    def get_total_mem(self):
        # Get total memory in bytes and divide by 1024**2 to get the value in MB.
        return os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES') / (1024 ** 2)

    def get_processor_cores(self):
        return multiprocessing.cpu_count()

    def check_pid_alive(self, pid):
        try:
            pid = int(pid)
            os.kill(pid, 0)
        except (ValueError, TypeError):
            return False
        except OSError as os_error:
            if os_error.errno == errno.EPERM:
                return True
            return False
        return True

    @property
    def is_64bit(self):
        return sys.maxsize > 2 ** 32

    @staticmethod
    def _get_proc_stat():
        """
        Get the contents of /proc/stat.
        # cpu  813599 3940 909253 154538746 874851 0 6589 0 0 0
        # cpu0 401094 1516 453006 77276738 452939 0 3312 0 0 0
        # cpu1 412505 2423 456246 77262007 421912 0 3276 0 0 0

        :return: A single string with the contents of /proc/stat
        :rtype: str
        """
        results = None
        try:
            results = fileutil.read_file('/proc/stat')
        except (OSError, IOError) as ex:
            logger.warn("Couldn't read /proc/stat: {0}".format(ex.strerror))
            raise

        return results

    @staticmethod
    def get_total_cpu_ticks_since_boot():
        """
        Compute the number of USER_HZ units of time that have elapsed in all categories, across all cores, since boot.

        :return: int
        """
        system_cpu = 0
        proc_stat = DefaultOSUtil._get_proc_stat()
        if proc_stat is not None:
            for line in proc_stat.splitlines():
                if ALL_CPUS_REGEX.match(line):
                    system_cpu = sum(
                        int(i) for i in line.split()[1:8])  # see "man proc" for a description of these fields
                    break
        return system_cpu

    @staticmethod
    def get_used_and_available_system_memory():
        """
        Get the contents of free -b in bytes.
        # free -b
        #       total        used        free      shared  buff/cache   available
        # Mem:  8340144128   619352064   5236809728  1499136  2483982336  7426314240
        # Swap: 0            0           0

        :return: used and available memory in megabytes
        """
        used_mem = available_mem = 0
        free_cmd = ["free", "-b"]
        memory = shellutil.run_command(free_cmd)
        for line in memory.split("\n"):
            if ALL_MEMS_REGEX.match(line):
                mems = line.split()
                used_mem = int(mems[2])
                available_mem = int(mems[6])  # see "man free" for a description of these fields
        return used_mem/(1024 ** 2), available_mem/(1024 ** 2)

    def get_nic_state(self, as_string=False):
        """
        Capture NIC state (IPv4 and IPv6 addresses plus link state).

        :return: By default returns a dictionary of NIC state objects, with the NIC name as key. If as_string is True
                 returns the state as a string
        :rtype: dict(str,NetworkInformationCard)
        """
        state = {}

        all_command = ["ip", "-a", "-o", "link"]
        inet_command = ["ip", "-4", "-a", "-o", "address"]
        inet6_command = ["ip", "-6", "-a", "-o", "address"]

        try:
            all_output = shellutil.run_command(all_command)
        except shellutil.CommandError as command_error:
            logger.verbose("Could not fetch NIC link info: {0}", ustr(command_error))
            return "" if as_string else {}

        if as_string:
            def run_command(command):
                try:
                    return shellutil.run_command(command)
                except shellutil.CommandError as command_error:
                    return str(command_error)

            inet_output = run_command(inet_command)
            inet6_output = run_command(inet6_command)

            return "Executing {0}:\n{1}\nExecuting {2}:\n{3}\nExecuting {4}:\n{5}\n".format(all_command, all_output, inet_command, inet_output, inet6_command, inet6_output)
        else:
            self._update_nic_state_all(state, all_output)
            self._update_nic_state(state, inet_command, NetworkInterfaceCard.add_ipv4, "an IPv4 address")
            self._update_nic_state(state, inet6_command, NetworkInterfaceCard.add_ipv6, "an IPv6 address")
            return state

    @staticmethod
    def _update_nic_state_all(state, command_output):
        for entry in command_output.splitlines():
            # Sample output:
            #     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64
            #     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:30:c3:5a brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
            #     3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default \    link/ether 02:42:b5:d5:00:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 \    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q addrgenmode eui64
            result = IP_COMMAND_OUTPUT.match(entry)
            if result:
                name = result.group(1)
                state[name] = NetworkInterfaceCard(name, result.group(2))

    @staticmethod
    def _update_nic_state(state, ip_command, handler, description):
        """
        Update the state of NICs based on the output of a specified ip subcommand.

        :param dict(str, NetworkInterfaceCard) state: Dictionary of NIC state objects
        :param str ip_command: The ip command to run
        :param handler: A method on the NetworkInterfaceCard class
        :param str description: Description of the particular information being added to the state
        """
        try:
            output = shellutil.run_command(ip_command)

            for entry in output.splitlines():
                # family inet sample output:
                #     1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
                #     2: eth0    inet 10.145.187.220/26 brd 10.145.187.255 scope global eth0\       valid_lft forever preferred_lft forever
                #     3: docker0    inet 192.168.43.1/24 brd 192.168.43.255 scope global docker0\       valid_lft forever preferred_lft forever
                #
                # family inet6 sample output:
                #     1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
                #     2: eth0    inet6 fe80::20d:3aff:fe30:c35a/64 scope link \       valid_lft forever preferred_lft forever
                result = IP_COMMAND_OUTPUT.match(entry)
                if result:
                    interface_name = result.group(1)
                    if interface_name in state:
                        handler(state[interface_name], result.group(2))
                    else:
                        logger.error("Interface {0} has {1} but no link state".format(interface_name, description))
        except shellutil.CommandError as command_error:
            logger.error("[{0}] failed: {1}", ' '.join(ip_command), str(command_error))

    @staticmethod
    def _run_command_without_raising(cmd, log_error=True):
        try:
            shellutil.run_command(cmd, log_error=log_error)
        # Original implementation of run() does a blanket catch, so mimicking the behaviour here
        except Exception:
            pass

    @staticmethod
    def _run_multiple_commands_without_raising(commands, log_error=True, continue_on_error=False):
        for cmd in commands:
            try:
                shellutil.run_command(cmd, log_error=log_error)
            # Original implementation of run() does a blanket catch, so mimicking the behaviour here
            except Exception:
                if continue_on_error:
                    continue
                break

    @staticmethod
    def _run_command_raising_OSUtilError(cmd, err_msg, cmd_input=None):
        # This method runs shell command using the new secure shellutil.run_command and raises OSUtilErrors on failures.
        try:
            return shellutil.run_command(cmd, log_error=True, input=cmd_input)
        except shellutil.CommandError as e:
            raise OSUtilError(
                "{0}, Retcode: {1}, Output: {2}, Error: {3}".format(err_msg, e.returncode, e.stdout, e.stderr))
        except Exception as e:
            raise OSUtilError("{0}, Retcode: {1}, Error: {2}".format(err_msg, -1, ustr(e)))
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/devuan.py000066400000000000000000000034531510742556200252670ustar00rootroot00000000000000
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.common.osutil.default import DefaultOSUtil


class DevuanOSUtil(DefaultOSUtil):

    def __init__(self):
        super(DevuanOSUtil, self).__init__()
        self.jit_enabled = True

    def restart_ssh_service(self):
        logger.info("DevuanOSUtil::restart_ssh_service - trying to restart sshd")
        return shellutil.run("/usr/sbin/service restart ssh", chk_err=False)

    def stop_agent_service(self):
        logger.info("DevuanOSUtil::stop_agent_service - trying to stop waagent")
        return shellutil.run("/usr/sbin/service walinuxagent stop", chk_err=False)

    def start_agent_service(self):
        logger.info("DevuanOSUtil::start_agent_service - trying to start waagent")
        return shellutil.run("/usr/sbin/service walinuxagent start", chk_err=False)

    def start_network(self):
        pass

    def remove_rules_files(self, rules_files=""):
        pass

    def restore_rules_files(self, rules_files=""):
        pass

    def get_dhcp_lease_endpoint(self):
        return self.get_endpoint_from_leases_path('/var/lib/dhcp/dhclient.*.leases')
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/factory.py000066400000000000000000000133751510742556200254560ustar00rootroot00000000000000
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import azurelinuxagent.common.logger as logger
from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_CODE_NAME, DISTRO_VERSION, DISTRO_FULL_NAME
from azurelinuxagent.common.utils.distro_version import DistroVersion
from .alpine import AlpineOSUtil
from .arch import ArchUtil
from .bigip import BigIpOSUtil
from .clearlinux import ClearLinuxUtil
from .coreos import CoreOSUtil
from .chainguard import ChainguardOSUtil
from .debian import DebianOSBaseUtil, DebianOSModernUtil
from .default import DefaultOSUtil
from .devuan import DevuanOSUtil
from .freebsd import FreeBSDOSUtil
from .gaia import GaiaOSUtil
from .iosxe import IosxeOSUtil
from .mariner import MarinerOSUtil
from .nsbsd import NSBSDOSUtil
from .openbsd import OpenBSDOSUtil
from .openwrt import OpenWRTOSUtil
from .redhat import RedhatOSUtil, Redhat6xOSUtil, RedhatOSModernUtil
from .suse import SUSEOSUtil, SUSE11OSUtil
from .photonos import PhotonOSUtil
from .ubuntu import UbuntuOSUtil, Ubuntu12OSUtil, Ubuntu14OSUtil, \
    UbuntuSnappyOSUtil, Ubuntu16OSUtil, Ubuntu18OSUtil
from .fedora import FedoraOSUtil


def get_osutil(distro_name=DISTRO_NAME,
               distro_code_name=DISTRO_CODE_NAME,
               distro_version=DISTRO_VERSION,
               distro_full_name=DISTRO_FULL_NAME):
    # We are adding another layer of abstraction here since we want to be able to mock the final result of the
    # function call. Since the get_osutil function is imported in various places in our tests, we can't mock
    # it globally. Instead, we add _get_osutil function and mock it in the test base class, AgentTestCase.
    return _get_osutil(distro_name, distro_code_name, distro_version, distro_full_name)


def _get_osutil(distro_name, distro_code_name, distro_version, distro_full_name):

    if distro_name == "photonos":
        return PhotonOSUtil()

    if distro_name == "arch":
        return ArchUtil()

    if "Clear Linux" in distro_full_name:
        return ClearLinuxUtil()

    if distro_name == "ubuntu":
        ubuntu_version = DistroVersion(distro_version)
        if ubuntu_version in [DistroVersion("12.04"), DistroVersion("12.10")]:
            return Ubuntu12OSUtil()
        if ubuntu_version in [DistroVersion("14.04"), DistroVersion("14.10")]:
            return Ubuntu14OSUtil()
        if ubuntu_version in [DistroVersion('16.04'), DistroVersion('16.10'), DistroVersion('17.04')]:
            return Ubuntu16OSUtil()
        if DistroVersion('18.04') <= ubuntu_version <= DistroVersion('24.04'):
            return Ubuntu18OSUtil()
        if distro_full_name == "Snappy Ubuntu Core":
            return UbuntuSnappyOSUtil()
        return UbuntuOSUtil()

    if distro_name in ("alpine", "alpaquita"):
        return AlpineOSUtil()

    if distro_name == "chainguard":
        return ChainguardOSUtil()

    if distro_name == "kali":
        return DebianOSBaseUtil()

    if distro_name in ("flatcar", "coreos") or distro_code_name in ("flatcar", "coreos"):
        return CoreOSUtil()

    if distro_name in ("suse", "sle-micro", "sle_hpc", "sles", "opensuse"):
        if distro_full_name == 'SUSE Linux Enterprise Server' \
                and DistroVersion(distro_version) < DistroVersion('12') \
                or distro_full_name == 'openSUSE' and DistroVersion(distro_version) < DistroVersion('13.2'):
            return SUSE11OSUtil()
        return SUSEOSUtil()

    if distro_name == "debian":
        if "sid" in distro_version or DistroVersion(distro_version) > DistroVersion("7"):
            return DebianOSModernUtil()
        return DebianOSBaseUtil()

    # Devuan support only works with v4+
    # Reason is that Devuan v4 (Chimaera) uses python v3.9, in which the
    # platform.linux_distribution module has been removed. This was unable
    # to distinguish between debian and devuan. The new distro.linux_distribution module
    # is able to distinguish between the two.
    if distro_name == "devuan" and DistroVersion(distro_version) >= DistroVersion("4"):
        return DevuanOSUtil()

    if distro_name in ("redhat", "rhel", "centos", "oracle", "almalinux", "cloudlinux", "rocky"):
        if DistroVersion(distro_version) < DistroVersion("7"):
            return Redhat6xOSUtil()
        if DistroVersion(distro_version) >= DistroVersion("8.6"):
            return RedhatOSModernUtil()
        return RedhatOSUtil()

    if distro_name == "euleros":
        return RedhatOSUtil()

    if distro_name == "uos":
        return RedhatOSUtil()

    if distro_name == "freebsd":
        return FreeBSDOSUtil()

    if distro_name == "openbsd":
        return OpenBSDOSUtil()

    if distro_name == "bigip":
        return BigIpOSUtil()

    if distro_name == "gaia":
        return GaiaOSUtil()

    if distro_name == "iosxe":
        return IosxeOSUtil()

    if distro_name in ["mariner", "azurelinux"]:
        return MarinerOSUtil()

    if distro_name == "nsbsd":
        return NSBSDOSUtil()

    if distro_name == "openwrt":
        return OpenWRTOSUtil()

    if distro_name == "fedora":
        return FedoraOSUtil()

    logger.warn("Unable to load distro implementation for {0}. Using default distro implementation instead.", distro_name)
    return DefaultOSUtil()
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/fedora.py000066400000000000000000000045221510742556200252430ustar00rootroot00000000000000
#
# Copyright 2022 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import time

import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.common.osutil.default import DefaultOSUtil


class FedoraOSUtil(DefaultOSUtil):

    def __init__(self):
        super(FedoraOSUtil, self).__init__()
        self.agent_conf_file_path = '/etc/waagent.conf'

    @staticmethod
    def get_systemd_unit_file_install_path():
        return '/usr/lib/systemd/system'

    @staticmethod
    def get_agent_bin_path():
        return '/usr/sbin'

    def is_dhcp_enabled(self):
        return True

    def start_network(self):
        pass

    def restart_if(self, ifname=None, retries=None, wait=None):
        retry_limit = retries+1
        for attempt in range(1, retry_limit):
            return_code = shellutil.run("ip link set {0} down && ip link set {0} up".format(ifname))
            if return_code == 0:
                return
            logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code))
            if attempt < retry_limit:
                logger.info("retrying in {0} seconds".format(wait))
                time.sleep(wait)
            else:
                logger.warn("exceeded restart retries")

    def restart_ssh_service(self):
        shellutil.run('systemctl restart sshd')

    def stop_dhcp_service(self):
        pass

    def start_dhcp_service(self):
        pass

    def start_agent_service(self):
        return shellutil.run('systemctl start waagent', chk_err=False)

    def stop_agent_service(self):
        return shellutil.run('systemctl stop waagent', chk_err=False)

    def get_dhcp_pid(self):
        return self._get_dhcp_pid(["pidof", "dhclient"])

    def conf_sshd(self, disable_password):
        pass
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/freebsd.py000066400000000000000000000607021510742556200254170ustar00rootroot00000000000000
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import socket
import struct
import binascii
import azurelinuxagent.common.utils.fileutil as fileutil
import azurelinuxagent.common.utils.shellutil as shellutil
import azurelinuxagent.common.utils.textutil as textutil
import azurelinuxagent.common.logger as logger
from azurelinuxagent.common.exception import OSUtilError
from azurelinuxagent.common.osutil.default import DefaultOSUtil
from azurelinuxagent.common.future import ustr


class FreeBSDOSUtil(DefaultOSUtil):

    def __init__(self):
        super(FreeBSDOSUtil, self).__init__()
        self.agent_conf_file_path = '/usr/local/etc/waagent.conf'
        self._scsi_disks_timeout_set = False
        self.jit_enabled = True

    @staticmethod
    def get_agent_bin_path():
        return "/usr/local/sbin"

    def set_hostname(self, hostname):
        rc_file_path = '/etc/rc.conf'
        conf_file = fileutil.read_file(rc_file_path).split("\n")
        textutil.set_ini_config(conf_file, "hostname", hostname)
        fileutil.write_file(rc_file_path, "\n".join(conf_file))
        self._run_command_without_raising(["hostname", hostname], log_error=False)

    def restart_ssh_service(self):
        return shellutil.run('service sshd restart', chk_err=False)

    def useradd(self, username, expiration=None, comment=None):
        """
        Create user account with 'username'
        """
        userentry = self.get_userentry(username)
        if userentry is not None:
            logger.warn("User {0} already exists, skip useradd", username)
            return

        if expiration is not None:
            cmd = ["pw", "useradd", username, "-e", expiration, "-m"]
        else:
            cmd = ["pw", "useradd", username, "-m"]
        if comment is not None:
            cmd.extend(["-c", comment])
        self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username))

    def del_account(self, username):
        if self.is_sys_user(username):
            logger.error("{0} is a system user. Will not delete it.", username)
        self._run_command_without_raising(['touch', '/var/run/utx.active'])
        self._run_command_without_raising(['rmuser', '-y', username])
        self.conf_sudoer(username, remove=True)

    def chpasswd(self, username, password, crypt_id=6, salt_len=10):
        if self.is_sys_user(username):
            raise OSUtilError(("User {0} is a system user, "
                               "will not set password.").format(username))
        passwd_hash = DefaultOSUtil.gen_password_hash(password, crypt_id, salt_len)
        self._run_command_raising_OSUtilError(['pw', 'usermod', username, '-H', '0'], cmd_input=passwd_hash,
                                              err_msg="Failed to set password for {0}".format(username))

    def del_root_password(self):
        err = shellutil.run('pw usermod root -h -')
        if err:
            raise OSUtilError("Failed to delete root password: Failed to update password database.")

    def get_if_mac(self, ifname):
        data = self._get_net_info()
        if data[0] == ifname:
            return data[2].replace(':', '').upper()
        return None

    def get_first_if(self):
        return self._get_net_info()[:2]

    @staticmethod
    def read_route_table():
        """
        Return a list of strings comprising the route table as in the Linux /proc/net/route format. The input taken
        is from FreeBSDs `netstat -rn -f inet` command. Here is what the function does in detail:

        1. Runs `netstat -rn -f inet` which outputs a column formatted list of ipv4 routes in priority order like so:

            > Routing tables
            >
            > Internet:
            > Destination        Gateway            Flags    Refs      Use  Netif Expire
            > default            61.221.xx.yy       UGS         0      247    em1
            > 10                 10.10.110.5        UGS         0       50    em0
            > 10.10.110/26       link#1             UC          0        0    em0
            > 10.10.110.5        00:1b:0d:e6:58:40  UHLW        2        0    em0   1145
            > 61.221.xx.yy/29    link#2             UC          0        0    em1
            > 61.221.xx.yy       00:1b:0d:e6:57:c0  UHLW        2        0    em1   1055
            > 61.221.xx/24       link#2             UC          0        0    em1
            > 127.0.0.1          127.0.0.1          UH          0        0    lo0

        2. Convert it to an array of lines that resemble an equivalent /proc/net/route content on a Linux system like so:

            > Iface   Destination     Gateway         Flags   RefCnt  Use     Metric  Mask            MTU     Window  IRTT
            > gre828  00000000        00000000        0001    0       0       0       000000F8        0       0       0
            > ens160  00000000        FE04700A        0003    0       0       100     00000000        0       0       0
            > gre828  00000008        00000000        0001    0       0       0       000000FE        0       0       0
            > ens160  0004700A        00000000        0001    0       0       100     00FFFFFF        0       0       0
            > gre828  2504700A        00000000        0005    0       0       0       FFFFFFFF        0       0       0
            > gre828  3704700A        00000000        0005    0       0       0       FFFFFFFF        0       0       0
            > gre828  4104700A        00000000        0005    0       0       0       FFFFFFFF        0       0       0

        :return: Entries in the ipv4 route priority list from `netstat -rn -f inet` in the linux `/proc/net/route` style
        :rtype: list(str)
        """

        def _get_netstat_rn_ipv4_routes():
            """
            Runs `netstat -rn -f inet` and parses its output and returns a list of routes where the key is the column
            name and the value is the value in the column, stripped of leading and trailing whitespace.

            :return: List of dictionaries representing routes in the ipv4 route priority list from `netstat -rn -f inet`
            :rtype: list(dict)
            """
            cmd = ["netstat", "-rn", "-f", "inet"]
            output = shellutil.run_command(cmd, log_error=True)
            output_lines = output.split("\n")
            if len(output_lines) < 3:
                raise OSUtilError("`netstat -rn -f inet` output seems to be empty")
            output_lines = [line.strip() for line in output_lines if line]
            if "Internet:" not in output_lines:
                raise OSUtilError("`netstat -rn -f inet` output seems to contain no ipv4 routes")
            route_header_line = output_lines.index("Internet:") + 1
            # Parse the file structure and left justify the routes
            route_start_line = route_header_line + 1
            route_line_length = max(len(line) for line in output_lines[route_header_line:])
            netstat_route_list = [line.ljust(route_line_length) for line in output_lines[route_start_line:]]
            # Parse the headers
            _route_headers = output_lines[route_header_line].split()
            n_route_headers = len(_route_headers)
            route_columns = {}
            for i in range(0, n_route_headers - 1):
                route_columns[_route_headers[i]]
= ( output_lines[route_header_line].index(_route_headers[i]), (output_lines[route_header_line].index(_route_headers[i + 1]) - 1) ) route_columns[_route_headers[n_route_headers - 1]] = ( output_lines[route_header_line].index(_route_headers[n_route_headers - 1]), None ) # Parse the routes netstat_routes = [] n_netstat_routes = len(netstat_route_list) for i in range(0, n_netstat_routes): netstat_route = {} for column in route_columns: netstat_route[column] = netstat_route_list[i][ route_columns[column][0]:route_columns[column][1]].strip() netstat_route["Metric"] = n_netstat_routes - i netstat_routes.append(netstat_route) # Return the Sections return netstat_routes def _ipv4_ascii_address_to_hex(ipv4_ascii_address): """ Converts an IPv4 32bit address from its ASCII notation (ie. 127.0.0.1) to an 8 digit padded hex notation (ie. "0100007F") string. :return: 8 character long hex string representation of the IP :rtype: string """ # Raises socket.error if the IP is not a valid IPv4 return "%08X" % int(binascii.hexlify( struct.pack("!I", struct.unpack("=I", socket.inet_pton(socket.AF_INET, ipv4_ascii_address))[0])), 16) def _ipv4_cidr_mask_to_hex(ipv4_cidr_mask): """ Converts an subnet mask from its CIDR integer notation (ie. 32) to an 8 digit padded hex notation (ie. "FFFFFFFF") string representing its bitmask form. :return: 8 character long hex string representation of the IP :rtype: string """ return "{0:08x}".format( struct.unpack("=I", struct.pack("!I", (0xffffffff << (32 - ipv4_cidr_mask)) & 0xffffffff))[0]).upper() def _ipv4_cidr_destination_to_hex(destination): """ Converts an destination address from its CIDR notation (ie. 127.0.0.1/32 or default or localhost) to an 8 digit padded hex notation (ie. "0100007F" or "00000000" or "0100007F") string and its subnet bitmask also in hex (FFFFFFFF). 
:return: tuple of 8 character long hex string representation of the IP and 8 character long hex string representation of the subnet mask :rtype: tuple(string, int) """ destination_ip = "0.0.0.0" destination_subnetmask = 32 if destination != "default": if destination == "localhost": destination_ip = "127.0.0.1" else: destination_ip = destination.split("/") if len(destination_ip) > 1: destination_subnetmask = int(destination_ip[1]) destination_ip = destination_ip[0] hex_destination_ip = _ipv4_ascii_address_to_hex(destination_ip) hex_destination_subnetmask = _ipv4_cidr_mask_to_hex(destination_subnetmask) return hex_destination_ip, hex_destination_subnetmask def _try_ipv4_gateway_to_hex(gateway): """ If the gateway is an IPv4 address, return its IP in hex, else, return "00000000" :return: 8 character long hex string representation of the IP of the gateway :rtype: string """ try: return _ipv4_ascii_address_to_hex(gateway) except socket.error: return "00000000" def _ascii_route_flags_to_bitmask(ascii_route_flags): """ Converts route flags to a bitmask of their equivalent linux/route.h values. :return: integer representation of a 16 bit mask :rtype: int """ bitmask_flags = 0 RTF_UP = 0x0001 RTF_GATEWAY = 0x0002 RTF_HOST = 0x0004 RTF_DYNAMIC = 0x0010 if "U" in ascii_route_flags: bitmask_flags |= RTF_UP if "G" in ascii_route_flags: bitmask_flags |= RTF_GATEWAY if "H" in ascii_route_flags: bitmask_flags |= RTF_HOST if "S" not in ascii_route_flags: bitmask_flags |= RTF_DYNAMIC return bitmask_flags def _freebsd_netstat_rn_route_to_linux_proc_net_route(netstat_route): """ Converts a single FreeBSD `netstat -rn -f inet` route to its equivalent /proc/net/route line. 
ie: > default 0.0.0.0 UGS 0 247 em1 to > em1 00000000 00000000 0003 0 0 0 FFFFFFFF 0 0 0 :return: string representation of the equivalent /proc/net/route line :rtype: string """ network_interface = netstat_route["Netif"] hex_destination_ip, hex_destination_subnetmask = _ipv4_cidr_destination_to_hex(netstat_route["Destination"]) hex_gateway = _try_ipv4_gateway_to_hex(netstat_route["Gateway"]) bitmask_flags = _ascii_route_flags_to_bitmask(netstat_route["Flags"]) dummy_refcount = 0 dummy_use = 0 route_metric = netstat_route["Metric"] dummy_mtu = 0 dummy_window = 0 dummy_irtt = 0 return "{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}\t{7}\t{8}\t{9}\t{10}".format( network_interface, hex_destination_ip, hex_gateway, bitmask_flags, dummy_refcount, dummy_use, route_metric, hex_destination_subnetmask, dummy_mtu, dummy_window, dummy_irtt ) linux_style_route_file = ["Iface\tDestination\tGateway\tFlags\tRefCnt\tUse\tMetric\tMask\tMTU\tWindow\tIRTT"] try: netstat_routes = _get_netstat_rn_ipv4_routes() # Make sure the `netstat -rn -f inet` contains columns for Netif, Destination, Gateway and Flags which are needed to convert # to the Linux Format if len(netstat_routes) > 0: missing_headers = [] if "Netif" not in netstat_routes[0]: missing_headers.append("Netif") if "Destination" not in netstat_routes[0]: missing_headers.append("Destination") if "Gateway" not in netstat_routes[0]: missing_headers.append("Gateway") if "Flags" not in netstat_routes[0]: missing_headers.append("Flags") if missing_headers: raise KeyError( "`netstat -rn -f inet` output is missing columns required to convert to the Linux /proc/net/route format; columns are [{0}]".format( missing_headers)) # Parse the Netstat IPv4 Routes for netstat_route in netstat_routes: try: linux_style_route = _freebsd_netstat_rn_route_to_linux_proc_net_route(netstat_route) linux_style_route_file.append(linux_style_route) except Exception: # Skip the route continue except Exception as e: logger.error("Cannot read route table [{0}]", ustr(e)) 
return linux_style_route_file @staticmethod def get_list_of_routes(route_table): """ Construct a list of all network routes known to this system. :param list(str) route_table: List of text entries from route table, including headers :return: a list of network routes :rtype: list(RouteEntry) """ route_list = [] count = len(route_table) if count < 1: logger.error("netstat -rn -f inet is missing headers") elif count == 1: logger.error("netstat -rn -f inet contains no routes") else: route_list = DefaultOSUtil._build_route_list(route_table) return route_list def get_primary_interface(self): """ Get the name of the primary interface, which is the one with the default route attached to it; if there are multiple default routes, the primary has the lowest Metric. :return: the interface which has the default route """ RTF_GATEWAY = 0x0002 DEFAULT_DEST = "00000000" primary_interface = None if not self.disable_route_warning: logger.info("Examine `netstat -rn -f inet` for primary interface") route_table = self.read_route_table() def is_default(route): return (route.destination == DEFAULT_DEST) and (RTF_GATEWAY & route.flags) candidates = list(filter(is_default, self.get_list_of_routes(route_table))) if len(candidates) > 0: def get_metric(route): return int(route.metric) primary_route = min(candidates, key=get_metric) primary_interface = primary_route.interface if primary_interface is None: primary_interface = '' if not self.disable_route_warning: logger.warn('Could not determine primary interface, ' 'please ensure routes are correct') logger.warn('Primary interface examination will retry silently') self.disable_route_warning = True else: logger.info('Primary interface is [{0}]'.format(primary_interface)) self.disable_route_warning = False return primary_interface def is_primary_interface(self, ifname): """ Indicate whether the specified interface is the primary. :param ifname: the name of the interface - eth0, lo, etc. 
        :return: True if this interface binds the default route
        """
        return self.get_primary_interface() == ifname

    def is_loopback(self, ifname):
        """
        Determine if a named interface is loopback.
        """
        return ifname.startswith("lo")

    def route_add(self, net, mask, gateway):
        cmd = 'route add {0} {1} {2}'.format(net, gateway, mask)
        return shellutil.run(cmd, chk_err=False)

    def is_missing_default_route(self):
        """
        On FreeBSD, the default broadcast goes to the current default gateway,
        not an all-ones broadcast address; the route must be added manually
        to make it work in a VNET environment.
        SEE ALSO: man ip(4) IP_ONESBCAST
        """
        RTF_GATEWAY = 0x0002
        DEFAULT_DEST = "00000000"
        route_table = self.read_route_table()
        routes = self.get_list_of_routes(route_table)
        for route in routes:
            if (route.destination == DEFAULT_DEST) and (RTF_GATEWAY & route.flags):
                return False
        return True

    def is_dhcp_enabled(self):
        return True

    def start_dhcp_service(self):
        shellutil.run("/etc/rc.d/dhclient start {0}".format(self.get_if_name()), chk_err=False)

    def allow_dhcp_broadcast(self):
        pass

    def set_route_for_dhcp_broadcast(self, ifname):
        return shellutil.run("route add 255.255.255.255 -iface {0}".format(ifname), chk_err=False)

    def remove_route_for_dhcp_broadcast(self, ifname):
        shellutil.run("route delete 255.255.255.255 -iface {0}".format(ifname), chk_err=False)

    def get_dhcp_pid(self):
        return self._get_dhcp_pid(["pgrep", "-n", "dhclient"])

    def eject_dvd(self, chk_err=True):
        dvd = self.get_dvd_device()
        retcode = shellutil.run("cdcontrol -f {0} eject".format(dvd))
        if chk_err and retcode != 0:
            raise OSUtilError("Failed to eject dvd: ret={0}".format(retcode))

    def restart_if(self, ifname, retries=None, wait=None):
        # Restart dhclient only to publish hostname
        shellutil.run("/etc/rc.d/dhclient restart {0}".format(ifname), chk_err=False)

    def get_total_mem(self):
        cmd = "sysctl hw.physmem | awk '{print $2}'"
        ret, output = shellutil.run_get_output(cmd)
        if ret:
            raise OSUtilError("Failed to get total memory: {0}".format(output))
        try:
            return int(output) / 1024 / 1024
        except ValueError:
            raise OSUtilError("Failed to get total memory: {0}".format(output))

    def get_processor_cores(self):
        ret, output = shellutil.run_get_output("sysctl hw.ncpu | awk '{print $2}'")
        if ret:
            raise OSUtilError("Failed to get processor cores.")
        try:
            return int(output)
        except ValueError:
            raise OSUtilError("Failed to get processor cores: {0}".format(output))

    def set_scsi_disks_timeout(self, timeout):
        if self._scsi_disks_timeout_set:
            return
        ret, output = shellutil.run_get_output('sysctl kern.cam.da.default_timeout={0}'.format(timeout))
        if ret:
            raise OSUtilError("Failed to set SCSI disks timeout: {0}".format(output))
        self._scsi_disks_timeout_set = True

    def check_pid_alive(self, pid):
        return shellutil.run('ps -p {0}'.format(pid), chk_err=False) == 0

    @staticmethod
    def _get_net_info():
        """
        There is no SIOCGIFCONF on FreeBSD - just parse ifconfig.
        Returns strings: iface, inet4_addr, and mac
        or 'None,None,None' if unable to parse.
        We will sleep and retry as the network must be up.
        """
        iface = ''
        inet = ''
        mac = ''

        err, output = shellutil.run_get_output('ifconfig -l ether', chk_err=False)
        if err:
            raise OSUtilError("Can't find ether interface:{0}".format(output))
        ifaces = output.split()
        if not ifaces:
            raise OSUtilError("Can't find ether interface.")
        iface = ifaces[0]

        err, output = shellutil.run_get_output('ifconfig ' + iface, chk_err=False)
        if err:
            raise OSUtilError("Can't get info for interface:{0}".format(iface))

        for line in output.split('\n'):
            if line.find('inet ') != -1:
                inet = line.split()[1]
            elif line.find('ether ') != -1:
                mac = line.split()[1]
        logger.verbose("Interface info: ({0},{1},{2})", iface, inet, mac)

        return iface, inet, mac

    def device_for_ide_port(self, port_id):
        """
        Return device name attached to ide port 'n'.
""" if port_id > 3: return None g0 = "00000000" if port_id > 1: g0 = "00000001" port_id = port_id - 2 err, output = shellutil.run_get_output('sysctl dev.storvsc | grep pnpinfo | grep deviceid=') if err: return None g1 = "000" + ustr(port_id) g0g1 = "{0}-{1}".format(g0, g1) # pylint: disable=W0105 """ search 'X' from 'dev.storvsc.X.%pnpinfo: classid=32412632-86cb-44a2-9b5c-50d1417354f5 deviceid=00000000-0001-8899-0000-000000000000' """ # pylint: enable=W0105 cmd_search_ide = "sysctl dev.storvsc | grep pnpinfo | grep deviceid={0}".format(g0g1) err, output = shellutil.run_get_output(cmd_search_ide) if err: return None cmd_extract_id = cmd_search_ide + "|awk -F . '{print $3}'" err, output = shellutil.run_get_output(cmd_extract_id) # pylint: disable=W0105 """ try to search 'blkvscX' and 'storvscX' to find device name """ # pylint: enable=W0105 output = output.rstrip() cmd_search_blkvsc = "camcontrol devlist -b | grep blkvsc{0} | awk '{{print $1}}'".format(output) err, output = shellutil.run_get_output(cmd_search_blkvsc) if err == 0: output = output.rstrip() cmd_search_dev = "camcontrol devlist | grep {0} | awk -F \\( '{{print $2}}'|sed -e 's/.*(//'| sed -e 's/).*//'".format(output) err, output = shellutil.run_get_output(cmd_search_dev) if err == 0: for possible in output.rstrip().split(','): if not possible.startswith('pass'): return possible cmd_search_storvsc = "camcontrol devlist -b | grep storvsc{0} | awk '{{print $1}}'".format(output) err, output = shellutil.run_get_output(cmd_search_storvsc) if err == 0: output = output.rstrip() cmd_search_dev = "camcontrol devlist | grep {0} | awk -F \\( '{{print $2}}'|sed -e 's/.*(//'| sed -e 's/).*//'".format(output) err, output = shellutil.run_get_output(cmd_search_dev) if err == 0: for possible in output.rstrip().split(','): if not possible.startswith('pass'): return possible return None @staticmethod def get_total_cpu_ticks_since_boot(): return 0 
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/gaia.py000066400000000000000000000166731510742556200247160ustar00rootroot00000000000000# # Copyright 2017 Check Point Software Technologies # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import base64 import socket import struct import time import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.future import ustr, bytebuffer, range, int # pylint: disable=redefined-builtin import azurelinuxagent.common.logger as logger from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.utils.cryptutil import CryptUtil import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil class GaiaOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=W0235 super(GaiaOSUtil, self).__init__() def _run_clish(self, cmd): ret = 0 out = "" for i in range(10): # pylint: disable=W0612 try: final_command = ["/bin/clish", "-s", "-c", "'{0}'".format(cmd)] out = shellutil.run_command(final_command, log_error=True) ret = 0 break except shellutil.CommandError as e: ret = e.returncode out = e.stdout except Exception as e: ret = -1 out = ustr(e) if 'NMSHST0025' in out: # Entry for [hostname] already present ret = 0 break time.sleep(2) return ret, out def useradd(self, username, expiration=None, comment=None): logger.warn('useradd is not supported on 
GAiA') def chpasswd(self, username, password, crypt_id=6, salt_len=10): logger.info('chpasswd') passwd_hash = DefaultOSUtil.gen_password_hash(password, crypt_id, salt_len) ret, out = self._run_clish( 'set user admin password-hash ' + passwd_hash) if ret != 0: raise OSUtilError(("Failed to set password for {0}: {1}" "").format('admin', out)) def conf_sudoer(self, username, nopasswd=False, remove=False): logger.info('conf_sudoer is not supported on GAiA') def del_root_password(self): logger.info('del_root_password') ret, out = self._run_clish('set user admin password-hash *LOCK*') # pylint: disable=W0612 if ret != 0: raise OSUtilError("Failed to delete root password") def _replace_user(self, path, username): if path.startswith('$HOME'): path = '/home' + path[5:] parts = path.split('/') parts[2] = username return '/'.join(parts) def deploy_ssh_keypair(self, username, keypair): logger.info('deploy_ssh_keypair') username = 'admin' path, thumbprint = keypair path = self._replace_user(path, username) super(GaiaOSUtil, self).deploy_ssh_keypair( username, (path, thumbprint)) def openssl_to_openssh(self, input_file, output_file): cryptutil = CryptUtil(conf.get_openssl_cmd()) ret, out = shellutil.run_get_output( conf.get_openssl_cmd() + " rsa -pubin -noout -text -in '" + input_file + "'") if ret != 0: raise OSUtilError('openssl failed with {0}'.format(ret)) modulus = [] exponent = [] buf = None for line in out.split('\n'): if line.startswith('Modulus:'): buf = modulus buf.append(line) continue if line.startswith('Exponent:'): buf = exponent buf.append(line) continue if buf and line: buf.append(line.strip().replace(':', '')) def text_to_num(buf): if len(buf) == 1: return int(buf[0].split()[1]) return int(''.join(buf[1:]), 16) n = text_to_num(modulus) e = text_to_num(exponent) keydata = bytearray() keydata.extend(struct.pack('>I', len('ssh-rsa'))) keydata.extend(b'ssh-rsa') keydata.extend(struct.pack('>I', len(cryptutil.num_to_bytes(e)))) 
keydata.extend(cryptutil.num_to_bytes(e)) keydata.extend(struct.pack('>I', len(cryptutil.num_to_bytes(n)) + 1)) keydata.extend(b'\0') keydata.extend(cryptutil.num_to_bytes(n)) keydata_base64 = base64.b64encode(bytebuffer(keydata)) fileutil.write_file(output_file, ustr(b'ssh-rsa ' + keydata_base64 + b'\n', encoding='utf-8')) def deploy_ssh_pubkey(self, username, pubkey): logger.info('deploy_ssh_pubkey') username = 'admin' path, thumbprint, value = pubkey path = self._replace_user(path, username) super(GaiaOSUtil, self).deploy_ssh_pubkey( username, (path, thumbprint, value)) def eject_dvd(self, chk_err=True): logger.warn('eject is not supported on GAiA') def mount(self, device, mount_point, option=None, chk_err=True): if not option: option = [] if any('udf,iso9660' in opt for opt in option): ret, out = super(GaiaOSUtil, self).mount(device, mount_point, option=[opt.replace('udf,iso9660', 'udf') for opt in option], chk_err=chk_err) if not ret: return ret, out return super(GaiaOSUtil, self).mount( device, mount_point, option=option, chk_err=chk_err) def allow_dhcp_broadcast(self): logger.info('allow_dhcp_broadcast is ignored on GAiA') def remove_rules_files(self, rules_files=''): pass def restore_rules_files(self, rules_files=''): logger.info('restore_rules_files is ignored on GAiA') def restart_ssh_service(self): return shellutil.run('/sbin/service sshd condrestart', chk_err=False) def _address_to_string(self, addr): return socket.inet_ntoa(struct.pack("!I", addr)) def _get_prefix(self, mask): return str(sum(bin(int(x)).count('1') for x in mask.split('.'))) def route_add(self, net, mask, gateway): logger.info('route_add {0} {1} {2}', net, mask, gateway) if net == 0 and mask == 0: cidr = 'default' else: cidr = self._address_to_string(net) + '/' + self._get_prefix( self._address_to_string(mask)) ret, out = self._run_clish( # pylint: disable=W0612 'set static-route ' + cidr + ' nexthop gateway address ' + self._address_to_string(gateway) + ' on') return ret def 
set_hostname(self, hostname): logger.warn('set_hostname is ignored on GAiA') def set_dhcp_hostname(self, hostname): logger.warn('set_dhcp_hostname is ignored on GAiA') def publish_hostname(self, hostname, recover_nic=False): logger.warn('publish_hostname is ignored on GAiA') def del_account(self, username): logger.warn('del_account is ignored on GAiA') Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/iosxe.py000066400000000000000000000067461510742556200251440ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil.default import DefaultOSUtil, PRODUCT_ID_FILE, DMIDECODE_CMD, UUID_PATTERN from azurelinuxagent.common.utils import textutil, fileutil # pylint: disable=W0611 # pylint: disable=W0105 ''' The IOSXE distribution is a variant of the Centos distribution, version 7.1. 
The primary difference is that IOSXE makes some assumptions about the waagent environment: - only the waagent daemon is executed - no provisioning is performed - no DHCP-based services are available ''' # pylint: enable=W0105 class IosxeOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=W0235 super(IosxeOSUtil, self).__init__() @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" def set_hostname(self, hostname): """ Unlike redhat 6.x, redhat 7.x will set hostname via hostnamectl Due to a bug in systemd in Centos-7.0, if this call fails, fallback to hostname. """ hostnamectl_cmd = ["hostnamectl", "set-hostname", hostname, "--static"] try: shellutil.run_command(hostnamectl_cmd) except Exception as e: logger.warn("[{0}] failed with error: {1}, attempting fallback".format(' '.join(hostnamectl_cmd), ustr(e))) DefaultOSUtil.set_hostname(self, hostname) def publish_hostname(self, hostname, recover_nic=False): """ Restart NetworkManager first before publishing hostname """ shellutil.run("service NetworkManager restart") super(IosxeOSUtil, self).publish_hostname(hostname, recover_nic) def register_agent_service(self): return shellutil.run("systemctl enable waagent", chk_err=False) def unregister_agent_service(self): return shellutil.run("systemctl disable waagent", chk_err=False) def openssl_to_openssh(self, input_file, output_file): DefaultOSUtil.openssl_to_openssh(self, input_file, output_file) def is_dhcp_available(self): return False def get_instance_id(self): ''' Azure records a UUID as the instance ID First check /sys/class/dmi/id/product_uuid. 
        If that is missing, then extracts from dmidecode
        If nothing works (for old VMs), return the empty string
        '''
        if os.path.isfile(PRODUCT_ID_FILE):
            try:
                s = fileutil.read_file(PRODUCT_ID_FILE).strip()
                return self._correct_instance_id(s.strip())
            except IOError:
                pass
        rc, s = shellutil.run_get_output(DMIDECODE_CMD)
        if rc != 0 or UUID_PATTERN.match(s) is None:
            return ""
        return self._correct_instance_id(s.strip())
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/mariner.py000066400000000000000000000047151510742556200254440ustar00rootroot00000000000000#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

from azurelinuxagent.common.osutil.default import DefaultOSUtil


class MarinerOSUtil(DefaultOSUtil):
    def __init__(self):
        super(MarinerOSUtil, self).__init__()
        self.jit_enabled = True

    @staticmethod
    def get_systemd_unit_file_install_path():
        return "/usr/lib/systemd/system"

    @staticmethod
    def get_agent_bin_path():
        return "/usr/bin"

    def is_dhcp_enabled(self):
        return True

    def start_network(self):
        self._run_command_without_raising(["systemctl", "start", "systemd-networkd"], log_error=False)

    def restart_if(self, ifname=None, retries=None, wait=None):
        self._run_command_without_raising(["systemctl", "restart", "systemd-networkd"])

    def restart_ssh_service(self):
        self._run_command_without_raising(["systemctl", "restart", "sshd"])

    def stop_dhcp_service(self):
        self._run_command_without_raising(["systemctl", "stop", "systemd-networkd"], log_error=False)

    def start_dhcp_service(self):
        self._run_command_without_raising(["systemctl", "start", "systemd-networkd"], log_error=False)

    def start_agent_service(self):
        self._run_command_without_raising(["systemctl", "start", "{0}".format(self.service_name)], log_error=False)

    def stop_agent_service(self):
        self._run_command_without_raising(["systemctl", "stop", "{0}".format(self.service_name)], log_error=False)

    def register_agent_service(self):
        self._run_command_without_raising(["systemctl", "enable", "{0}".format(self.service_name)], log_error=False)

    def unregister_agent_service(self):
        self._run_command_without_raising(["systemctl", "disable", "{0}".format(self.service_name)], log_error=False)

    def get_dhcp_pid(self):
        return self._get_dhcp_pid(["pidof", "systemd-networkd"])

    def conf_sshd(self, disable_password):
        pass
Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/nsbsd.py000066400000000000000000000124461510742556200251160ustar00rootroot00000000000000#
# Copyright 2018 Stormshield
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in
compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import os import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.freebsd import FreeBSDOSUtil class NSBSDOSUtil(FreeBSDOSUtil): resolver = None def __init__(self): super(NSBSDOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' if self.resolver is None: # NSBSD doesn't have a system resolver, configure a python one try: import dns.resolver except ImportError: raise OSUtilError("Python DNS resolver not available. 
Cannot proceed!") self.resolver = dns.resolver.Resolver(configure=False) servers = [] cmd = "getconf /usr/Firewall/ConfigFiles/dns Servers | tail -n +2" ret, output = shellutil.run_get_output(cmd) # pylint: disable=W0612 for server in output.split("\n"): if server == '': break server = server[:-1] # remove last '=' cmd = "grep '{}' /etc/hosts".format(server) + " | awk '{print $1}'" ret, ip = shellutil.run_get_output(cmd) ip = ip.strip() # Remove new line char servers.append(ip) self.resolver.nameservers = servers dns.resolver.override_system_resolver(self.resolver) def set_hostname(self, hostname): self._run_command_without_raising( ['/usr/Firewall/sbin/setconf', '/usr/Firewall/System/global', 'SystemName', hostname]) self._run_command_without_raising(["/usr/Firewall/sbin/enlog"]) self._run_command_without_raising(["/usr/Firewall/sbin/enproxy", "-u"]) self._run_command_without_raising(["/usr/Firewall/sbin/ensl", "-u"]) self._run_command_without_raising(["/usr/Firewall/sbin/ennetwork", "-f"]) def restart_ssh_service(self): return shellutil.run('/usr/Firewall/sbin/enservice', chk_err=False) def conf_sshd(self, disable_password): option = "0" if disable_password else "1" shellutil.run('setconf /usr/Firewall/ConfigFiles/system SSH State 1', chk_err=False) shellutil.run('setconf /usr/Firewall/ConfigFiles/system SSH Password {}'.format(option), chk_err=False) shellutil.run('enservice', chk_err=False) logger.info("{0} SSH password-based authentication methods." 
.format("Disabled" if disable_password else "Enabled")) def get_root_username(self): return "admin" def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ logger.warn("User creation disabled") def del_account(self, username): logger.warn("User deletion disabled") def conf_sudoer(self, username, nopasswd=False, remove=False): logger.warn("Sudo is not enabled") def chpasswd(self, username, password, crypt_id=6, salt_len=10): self._run_command_raising_OSUtilError(["/usr/Firewall/sbin/fwpasswd", "-p", password], err_msg="Failed to set password for admin") # password set, activate webadmin and ssh access commands = [['setconf', '/usr/Firewall/ConfigFiles/webadmin', 'ACL', 'any'], ['ensl']] self._run_multiple_commands_without_raising(commands, log_error=False, continue_on_error=False) def deploy_ssh_pubkey(self, username, pubkey): """ Deploy authorized_key """ path, thumbprint, value = pubkey # pylint: disable=W0612 # overide parameters super(NSBSDOSUtil, self).deploy_ssh_pubkey('admin', ["/usr/Firewall/.ssh/authorized_keys", thumbprint, value]) def del_root_password(self): logger.warn("Root password deletion disabled") def start_dhcp_service(self): shellutil.run("/usr/Firewall/sbin/nstart dhclient", chk_err=False) def stop_dhcp_service(self): shellutil.run("/usr/Firewall/sbin/nstop dhclient", chk_err=False) def get_dhcp_pid(self): ret = "" pidfile = "/var/run/dhclient.pid" if os.path.isfile(pidfile): ret = fileutil.read_file(pidfile, encoding='ascii') return self._text_to_pid_list(ret) def eject_dvd(self, chk_err=True): pass def restart_if(self, ifname=None, retries=None, wait=None): # Restart dhclient only to publish hostname shellutil.run("ennetwork", chk_err=False) def set_dhcp_hostname(self, hostname): # already done by the dhcp client pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/openbsd.py000066400000000000000000000325411510742556200254370ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # 
Copyright 2018 Microsoft Corporation # Copyright 2017 Reyk Floeter # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and OpenSSL 1.0+ import os import re import time import glob import datetime from azurelinuxagent.common.future import UTC import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.logger as logger import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.default import DefaultOSUtil UUID_PATTERN = re.compile( r'^\s*[A-F0-9]{8}(?:\-[A-F0-9]{4}){3}\-[A-F0-9]{12}\s*$', re.IGNORECASE) class OpenBSDOSUtil(DefaultOSUtil): def __init__(self): super(OpenBSDOSUtil, self).__init__() self.jit_enabled = True self._scsi_disks_timeout_set = False @staticmethod def get_agent_bin_path(): return "/usr/local/sbin" def get_instance_id(self): ret, output = shellutil.run_get_output("sysctl -n hw.uuid") if ret != 0 or UUID_PATTERN.match(output) is None: return "" return output.strip() def set_hostname(self, hostname): fileutil.write_file("/etc/myname", "{}\n".format(hostname)) self._run_command_without_raising(["hostname", hostname], log_error=False) def restart_ssh_service(self): return shellutil.run('rcctl restart sshd', chk_err=False) def start_agent_service(self): return shellutil.run('rcctl start {0}'.format(self.service_name), chk_err=False) def stop_agent_service(self): return 
shellutil.run('rcctl stop {0}'.format(self.service_name), chk_err=False) def register_agent_service(self): shellutil.run('chmod 0555 /etc/rc.d/{0}'.format(self.service_name), chk_err=False) return shellutil.run('rcctl enable {0}'.format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run('rcctl disable {0}'.format(self.service_name), chk_err=False) def del_account(self, username): if self.is_sys_user(username): logger.error("{0} is a system user. Will not delete it.", username) self._run_command_without_raising(["touch", "/var/run/utmp"]) self._run_command_without_raising(["userdel", "-r", username]) self.conf_sudoer(username, remove=True) def conf_sudoer(self, username, nopasswd=False, remove=False): doas_conf = "/etc/doas.conf" doas = None if not remove: if not os.path.isfile(doas_conf): # always allow root to become root doas = "permit keepenv nopass root\n" fileutil.append_file(doas_conf, doas) if nopasswd: doas = "permit keepenv nopass {0}\n".format(username) else: doas = "permit keepenv persist {0}\n".format(username) fileutil.append_file(doas_conf, doas) fileutil.chmod(doas_conf, 0o644) else: # Remove user from doas.conf if os.path.isfile(doas_conf): try: content = fileutil.read_file(doas_conf) doas = content.split("\n") doas = [x for x in doas if username not in x] fileutil.write_file(doas_conf, "\n".join(doas)) except IOError as err: raise OSUtilError("Failed to remove sudoer: " "{0}".format(err)) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if self.is_sys_user(username): raise OSUtilError(("User {0} is a system user. 
" "Will not set passwd.").format(username)) output = self._run_command_raising_OSUtilError(['encrypt'], cmd_input=password, err_msg="Failed to encrypt password for {0}".format(username)) passwd_hash = output.strip() self._run_command_raising_OSUtilError(['usermod', '-p', passwd_hash, username], err_msg="Failed to set password for {0}".format(username)) def del_root_password(self): ret, output = shellutil.run_get_output('usermod -p "*" root') if ret: raise OSUtilError("Failed to delete root password: " "{0}".format(output)) def get_if_mac(self, ifname): data = self._get_net_info() if data[0] == ifname: return data[2].replace(':', '').upper() return None def get_first_if(self): return self._get_net_info()[:2] def route_add(self, net, mask, gateway): cmd = 'route add {0} {1} {2}'.format(net, gateway, mask) return shellutil.run(cmd, chk_err=False) def is_missing_default_route(self): ret = shellutil.run("route -n get default", chk_err=False) if ret == 0: return False return True def is_dhcp_enabled(self): pass def start_dhcp_service(self): pass def stop_dhcp_service(self): pass def get_dhcp_lease_endpoint(self): """ OpenBSD has a sligthly different lease file format. 
""" endpoint = None pathglob = '/var/db/dhclient.leases.{}'.format(self.get_first_if()[0]) HEADER_LEASE = "lease" HEADER_OPTION = "option option-245" HEADER_EXPIRE = "expire" FOOTER_LEASE = "}" FORMAT_DATETIME = "%Y/%m/%d %H:%M:%S %Z" logger.info("looking for leases in path [{0}]".format(pathglob)) for lease_file in glob.glob(pathglob): leases = open(lease_file).read() if HEADER_OPTION in leases: cached_endpoint = None has_option_245 = False expired = True # assume expired for line in leases.splitlines(): if line.startswith(HEADER_LEASE): cached_endpoint = None has_option_245 = False expired = True elif HEADER_OPTION in line: try: ipaddr = line.split(" ")[-1].strip(";").split(":") cached_endpoint = \ ".".join(str(int(d, 16)) for d in ipaddr) has_option_245 = True except ValueError: logger.error("could not parse '{0}'".format(line)) elif HEADER_EXPIRE in line: if "never" in line: expired = False else: try: expire_string = line.split( " ", 4)[-1].strip(";") expire_date = datetime.datetime.strptime(expire_string, FORMAT_DATETIME).replace(tzinfo=UTC) if expire_date > datetime.datetime.now(UTC): expired = False except ValueError: logger.error("could not parse expiry token " "'{0}'".format(line)) elif FOOTER_LEASE in line: logger.info("dhcp entry:{0}, 245:{1}, expired: {2}" .format(cached_endpoint, has_option_245, expired)) if not expired and cached_endpoint is not None and has_option_245: endpoint = cached_endpoint logger.info("found endpoint [{0}]".format(endpoint)) # we want to return the last valid entry, so # keep searching if endpoint is not None: logger.info("cached endpoint found [{0}]".format(endpoint)) else: logger.info("cached endpoint not found") return endpoint def allow_dhcp_broadcast(self): pass def set_route_for_dhcp_broadcast(self, ifname): return shellutil.run("route add 255.255.255.255 -iface " "{0}".format(ifname), chk_err=False) def remove_route_for_dhcp_broadcast(self, ifname): shellutil.run("route delete 255.255.255.255 -iface " 
"{0}".format(ifname), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pgrep", "-n", "dhclient"]) def get_dvd_device(self, dev_dir='/dev'): pattern = r'cd[0-9]c' for dvd in [re.match(pattern, dev) for dev in os.listdir(dev_dir)]: if dvd is not None: return "/dev/{0}".format(dvd.group(0)) raise OSUtilError("Failed to get DVD device") def mount_dvd(self, max_retry=6, chk_err=True, dvd_device=None, mount_point=None, sleep_time=5): if dvd_device is None: dvd_device = self.get_dvd_device() if mount_point is None: mount_point = conf.get_dvd_mount_point() if not os.path.isdir(mount_point): os.makedirs(mount_point) for retry in range(0, max_retry): retcode = self.mount(dvd_device, mount_point, option=["-o", "ro", "-t", "udf"], chk_err=False) if retcode == 0: logger.info("Successfully mounted DVD") return if retry < max_retry - 1: mountlist = shellutil.run_get_output("/sbin/mount")[1] existing = self.get_mount_point(mountlist, dvd_device) if existing is not None: logger.info("{0} is mounted at {1}", dvd_device, existing) return logger.warn("Mount DVD failed: retry={0}, ret={1}", retry, retcode) time.sleep(sleep_time) if chk_err: raise OSUtilError("Failed to mount DVD.") def eject_dvd(self, chk_err=True): dvd = self.get_dvd_device() retcode = shellutil.run("cdio eject {0}".format(dvd)) if chk_err and retcode != 0: raise OSUtilError("Failed to eject DVD: ret={0}".format(retcode)) def restart_if(self, ifname, retries=3, wait=5): # Restart dhclient only to publish hostname shellutil.run("/sbin/dhclient {0}".format(ifname), chk_err=False) def get_total_mem(self): ret, output = shellutil.run_get_output("sysctl -n hw.physmem") if ret: raise OSUtilError("Failed to get total memory: {0}".format(output)) try: return int(output)/1024/1024 except ValueError: raise OSUtilError("Failed to get total memory: {0}".format(output)) def get_processor_cores(self): ret, output = shellutil.run_get_output("sysctl -n hw.ncpu") if ret: raise OSUtilError("Failed to get processor 
cores.") try: return int(output) except ValueError: raise OSUtilError("Failed to get total memory: {0}".format(output)) def set_scsi_disks_timeout(self, timeout): pass def check_pid_alive(self, pid): # pylint: disable=R1710 if not pid: return return shellutil.run('ps -p {0}'.format(pid), chk_err=False) == 0 @staticmethod def _get_net_info(): """ There is no SIOCGIFCONF on OpenBSD - just parse ifconfig. Returns strings: iface, inet4_addr, and mac or 'None,None,None' if unable to parse. We will sleep and retry as the network must be up. """ iface = '' inet = '' mac = '' ret, output = shellutil.run_get_output( 'ifconfig hvn | grep -E "^hvn.:" | sed "s/:.*//g"', chk_err=False) if ret: raise OSUtilError("Can't find ether interface:{0}".format(output)) ifaces = output.split() if not ifaces: raise OSUtilError("Can't find ether interface.") iface = ifaces[0] ret, output = shellutil.run_get_output( 'ifconfig ' + iface, chk_err=False) if ret: raise OSUtilError("Can't get info for interface:{0}".format(iface)) for line in output.split('\n'): if line.find('inet ') != -1: inet = line.split()[1] elif line.find('lladdr ') != -1: mac = line.split()[1] logger.verbose("Interface info: ({0},{1},{2})", iface, inet, mac) return iface, inet, mac def device_for_ide_port(self, port_id): """ Return device name attached to ide port 'n'. """ return "wd{0}".format(port_id) @staticmethod def get_total_cpu_ticks_since_boot(): return 0 Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/openwrt.py000066400000000000000000000136101510742556200254770ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2018 Sonus Networks, Inc. (d.b.a. Ribbon Communications Operating Company) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.utils.networkutil import NetworkInterfaceCard class OpenWRTOSUtil(DefaultOSUtil): def __init__(self): super(OpenWRTOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' self.dhclient_name = 'udhcpc' self.jit_enabled = True _ip_command_output = re.compile(r'^\d+:\s+(\w+):\s+(.*)$') def eject_dvd(self, chk_err=True): logger.warn('eject is not supported on OpenWRT') def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ userentry = self.get_userentry(username) if userentry is not None: logger.info("User {0} already exists, skip useradd", username) return if expiration is not None: cmd = ["useradd", "-m", username, "-s", "/bin/ash", "-e", expiration] else: cmd = ["useradd", "-m", username, "-s", "/bin/ash"] if not os.path.exists("/home"): os.mkdir("/home") if comment is not None: cmd.extend(["-c", comment]) self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", self.dhclient_name]) def get_nic_state(self, as_string=False): """ Capture NIC state (IPv4 and IPv6 addresses plus link state). 
:return: Dictionary of NIC state objects, with the NIC name as key :rtype: dict(str,NetworkInformationCard) """ if as_string: # as_string not supported on open wrt return '' state = {} status, output = shellutil.run_get_output("ip -o link", chk_err=False, log_cmd=False) if status != 0: logger.verbose("Could not fetch NIC link info; status {0}, {1}".format(status, output)) return {} for entry in output.splitlines(): result = OpenWRTOSUtil._ip_command_output.match(entry) if result: name = result.group(1) state[name] = NetworkInterfaceCard(name, result.group(2)) self._update_nic_state(state, "ip -o -f inet address", NetworkInterfaceCard.add_ipv4, "an IPv4 address") self._update_nic_state(state, "ip -o -f inet6 address", NetworkInterfaceCard.add_ipv6, "an IPv6 address") return state @staticmethod def _update_nic_state(state, ip_command, handler, description): """ Update the state of NICs based on the output of a specified ip subcommand. :param dict(str, NetworkInterfaceCard) state: Dictionary of NIC state objects :param str ip_command: The ip command to run :param handler: A method on the NetworkInterfaceCard class :param str description: Description of the particular information being added to the state """ status, output = shellutil.run_get_output(ip_command, chk_err=True) if status != 0: return for entry in output.splitlines(): result = OpenWRTOSUtil._ip_command_output.match(entry) if result: interface_name = result.group(1) if interface_name in state: handler(state[interface_name], result.group(2)) else: logger.error("Interface {0} has {1} but no link state".format(interface_name, description)) def is_dhcp_enabled(self): pass def start_dhcp_service(self): pass def stop_dhcp_service(self): pass def start_network(self) : return shellutil.run("/etc/init.d/network start", chk_err=True) def restart_ssh_service(self): # pylint: disable=R1710 # Since Dropbear is the default ssh server on OpenWRt, lets do a sanity check if os.path.exists("/etc/init.d/sshd"): return 
shellutil.run("/etc/init.d/sshd restart", chk_err=True) else: logger.warn("sshd service does not exists") def stop_agent_service(self): return shellutil.run("/etc/init.d/{0} stop".format(self.service_name), chk_err=True) def start_agent_service(self): return shellutil.run("/etc/init.d/{0} start".format(self.service_name), chk_err=True) def register_agent_service(self): return shellutil.run("/etc/init.d/{0} enable".format(self.service_name), chk_err=True) def unregister_agent_service(self): return shellutil.run("/etc/init.d/{0} disable".format(self.service_name), chk_err=True) def set_hostname(self, hostname): fileutil.write_file('/etc/hostname', hostname) commands = [['uci', 'set', 'system.@system[0].hostname={0}'.format(hostname)], ['uci', 'commit', 'system'], ['/etc/init.d/system', 'reload']] self._run_multiple_commands_without_raising(commands, log_error=False, continue_on_error=False) def remove_rules_files(self, rules_files=""): pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/photonos.py000066400000000000000000000040161510742556200256520ustar00rootroot00000000000000# # Copyright 2021 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class PhotonOSUtil(DefaultOSUtil): def __init__(self): super(PhotonOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' @staticmethod def get_systemd_unit_file_install_path(): return '/usr/lib/systemd/system' @staticmethod def get_agent_bin_path(): return '/usr/bin' def is_dhcp_enabled(self): return True def start_network(self) : return shellutil.run('systemctl start systemd-networkd', chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run('systemctl restart systemd-networkd') def restart_ssh_service(self): shellutil.run('systemctl restart sshd') def stop_dhcp_service(self): return shellutil.run('systemctl stop systemd-networkd', chk_err=False) def start_dhcp_service(self): return shellutil.run('systemctl start systemd-networkd', chk_err=False) def start_agent_service(self): return shellutil.run('systemctl start waagent', chk_err=False) def stop_agent_service(self): return shellutil.run('systemctl stop waagent', chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(['pidof', 'systemd-networkd']) def conf_sshd(self, disable_password): pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/redhat.py000066400000000000000000000344701510742556200252570ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os # pylint: disable=W0611 import re # pylint: disable=W0611 import pwd # pylint: disable=W0611 import shutil # pylint: disable=W0611 import socket # pylint: disable=W0611 import array # pylint: disable=W0611 import struct # pylint: disable=W0611 import fcntl # pylint: disable=W0611 import time # pylint: disable=W0611 import base64 # pylint: disable=W0611 import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import ustr, bytebuffer # pylint: disable=W0611 from azurelinuxagent.common.exception import OSUtilError, CryptError import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil # pylint: disable=W0611 from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.osutil.default import DefaultOSUtil class Redhat6xOSUtil(DefaultOSUtil): def __init__(self): super(Redhat6xOSUtil, self).__init__() self.jit_enabled = True def start_network(self): return shellutil.run("/sbin/service networking start", chk_err=False) def restart_ssh_service(self): return shellutil.run("/sbin/service sshd condrestart", chk_err=False) def stop_agent_service(self): return shellutil.run("/sbin/service {0} stop".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("/sbin/service {0} start".format(self.service_name), chk_err=False) def register_agent_service(self): return shellutil.run("chkconfig --add {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("chkconfig --del {0}".format(self.service_name), chk_err=False) def openssl_to_openssh(self, input_file, output_file): pubkey = fileutil.read_file(input_file) try: cryptutil = 
CryptUtil(conf.get_openssl_cmd()) ssh_rsa_pubkey = cryptutil.asn1_to_ssh(pubkey) except CryptError as e: raise OSUtilError(ustr(e)) fileutil.append_file(output_file, ssh_rsa_pubkey) # Override def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "dhclient"]) def set_hostname(self, hostname): """ Set /etc/sysconfig/network """ fileutil.update_conf_file('/etc/sysconfig/network', 'HOSTNAME', 'HOSTNAME={0}'.format(hostname)) self._run_command_without_raising(["hostname", hostname], log_error=False) def set_dhcp_hostname(self, hostname): ifname = self.get_if_name() filepath = "/etc/sysconfig/network-scripts/ifcfg-{0}".format(ifname) fileutil.update_conf_file(filepath, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME={0}'.format(hostname)) def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhclient/dhclient-*.leases') class RedhatOSUtil(Redhat6xOSUtil): def __init__(self): super(RedhatOSUtil, self).__init__() self.service_name = self.get_service_name() @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" @classmethod def get_network_setup_service_install_path(cls): """ In image mode, /usr is readonly, so the waagent-network-setup.service is written in /etc/systemd/system. In non-image mode, the default location is /usr/lib/systemd/system. """ if os.path.exists('/run/ostree-booted'): return "/etc/systemd/system" else: return cls.get_systemd_unit_file_install_path() def set_hostname(self, hostname): """ Unlike redhat 6.x, redhat 7.x will set hostname via hostnamectl Due to a bug in systemd in Centos-7.0, if this call fails, fallback to hostname. 
""" hostnamectl_cmd = ['hostnamectl', 'set-hostname', hostname, '--static'] try: shellutil.run_command(hostnamectl_cmd, log_error=False) except shellutil.CommandError: logger.warn("[{0}] failed, attempting fallback".format(' '.join(hostnamectl_cmd))) DefaultOSUtil.set_hostname(self, hostname) def get_nm_controlled(self, ifname): filepath = "/etc/sysconfig/network-scripts/ifcfg-{0}".format(ifname) nm_controlled_cmd = ['grep', 'NM_CONTROLLED=', filepath] try: result = shellutil.run_command(nm_controlled_cmd, log_error=False).rstrip() if result and len(result.split('=')) > 1: # Remove trailing white space and ' or " characters value = result.split('=')[1].replace("'", '').replace('"', '').rstrip() if value == "n" or value == "no": return False except shellutil.CommandError as e: # Command might fail because NM_CONTROLLED value is not in interface config file (exit code 1). # Log warning for any other exit code. # NM_CONTROLLED=y by default if not specified. if e.returncode != 1: logger.warn("[{0}] failed: {1}.\nAgent will continue to publish hostname without NetworkManager restart".format(' '.join(nm_controlled_cmd), e)) except Exception as e: logger.warn("Unexpected error while retrieving value of NM_CONTROLLED in {0}: {1}.\nAgent will continue to publish hostname without NetworkManager restart".format(filepath, e)) return True def get_nic_operational_and_general_states(self, ifname): """ Checks the contents of /sys/class/net/{ifname}/operstate and the results of 'nmcli -g general.state device show {ifname}' to determine the state of the provided interface. Raises an exception if the network interface state cannot be determined. 
""" filepath = "/sys/class/net/{0}/operstate".format(ifname) nic_general_state_cmd = ['nmcli', '-g', 'general.state', 'device', 'show', ifname] if not os.path.isfile(filepath): msg = "Unable to determine primary network interface {0} state, because state file does not exist: {1}".format(ifname, filepath) logger.warn(msg) raise Exception(msg) try: nic_oper_state = fileutil.read_file(filepath).rstrip().lower() nic_general_state = shellutil.run_command(nic_general_state_cmd, log_error=True).rstrip().lower() if nic_oper_state != "up": logger.warn("The primary network interface {0} operational state is '{1}'.".format(ifname, nic_oper_state)) else: logger.info("The primary network interface {0} operational state is '{1}'.".format(ifname, nic_oper_state)) if nic_general_state != "100 (connected)": logger.warn("The primary network interface {0} general state is '{1}'.".format(ifname, nic_general_state)) else: logger.info("The primary network interface {0} general state is '{1}'.".format(ifname, nic_general_state)) return nic_oper_state, nic_general_state except Exception as e: msg = "Unexpected error while determining the primary network interface state: {0}".format(e) logger.warn(msg) raise Exception(msg) def check_and_recover_nic_state(self, ifname): """ Checks if the provided network interface is in an 'up' state. If the network interface is in a 'down' state, attempt to recover the interface by restarting the Network Manager service. Raises an exception if an attempt to bring the interface into an 'up' state fails, or if the state of the network interface cannot be determined. """ nic_operstate, nic_general_state = self.get_nic_operational_and_general_states(ifname) if nic_operstate == "down" or "disconnected" in nic_general_state: logger.info("Restarting the Network Manager service to recover network interface {0}".format(ifname)) self.restart_network_manager() # Interface does not come up immediately after NetworkManager restart. 
Wait 5 seconds before checking # network interface state. time.sleep(5) nic_operstate, nic_general_state = self.get_nic_operational_and_general_states(ifname) # It is possible for network interface to be in an unknown or unmanaged state. Log warning if state is not # down, disconnected, up, or connected if nic_operstate != "up" or nic_general_state != "100 (connected)": msg = "Network Manager restart failed to bring network interface {0} into 'up' and 'connected' state".format(ifname) logger.warn(msg) raise Exception(msg) else: logger.info("Network Manager restart successfully brought the network interface {0} into 'up' and 'connected' state".format(ifname)) elif nic_operstate != "up" or nic_general_state != "100 (connected)": # We already logged a warning with the network interface state in get_nic_operstate(). Raise an exception # for the env thread to send to telemetry. raise Exception("The primary network interface {0} operational state is '{1}' and general state is '{2}'.".format(ifname, nic_operstate, nic_general_state)) def restart_network_manager(self): shellutil.run("service NetworkManager restart") def publish_hostname(self, hostname, recover_nic=False): """ Restart NetworkManager first before publishing hostname, only if the network interface is not controlled by the NetworkManager service (as determined by NM_CONTROLLED=n in the interface configuration). If the NetworkManager service is restarted before the agent publishes the hostname, and NM_controlled=y, a race condition may happen between the NetworkManager service and the Guest Agent making changes to the network interface configuration simultaneously. Note: check_and_recover_nic_state(ifname) raises an Exception if an attempt to recover the network interface fails, or if the network interface state cannot be determined. Callers should handle this exception by sending an event to telemetry. TODO: Improve failure reporting and add success reporting to telemetry for hostname changes. 
Right now we are only reporting failures to telemetry by raising an Exception in publish_hostname for the calling thread to handle by reporting the failure to telemetry. """ ifname = self.get_if_name() nm_controlled = self.get_nm_controlled(ifname) if not nm_controlled: self.restart_network_manager() # TODO: Current recover logic is only effective when the NetworkManager manages the network interface. Update the recover logic so it is effective even when NM_CONTROLLED=n super(RedhatOSUtil, self).publish_hostname(hostname, recover_nic and nm_controlled) def register_agent_service(self): return shellutil.run("systemctl enable {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("systemctl disable {0}".format(self.service_name), chk_err=False) def openssl_to_openssh(self, input_file, output_file): DefaultOSUtil.openssl_to_openssh(self, input_file, output_file) def get_dhcp_lease_endpoint(self): # dhclient endpoint = self.get_endpoint_from_leases_path('/var/lib/dhclient/dhclient-*.lease') if endpoint is None: # NetworkManager endpoint = self.get_endpoint_from_leases_path('/var/lib/NetworkManager/dhclient-*.lease') return endpoint class RedhatOSModernUtil(RedhatOSUtil): def __init__(self): # pylint: disable=W0235 super(RedhatOSModernUtil, self).__init__() def restart_ssh_service(self): return shellutil.run("systemctl condrestart sshd", chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def restart_network_manager(self): shellutil.run("systemctl restart NetworkManager") def restart_if(self, ifname, retries=3, wait=5): """ Restart an interface by bouncing the link. systemd-networkd observes this event, and forces a renew of DHCP. 
""" retry_limit = retries + 1 for attempt in range(1, retry_limit): return_code = shellutil.run("ip link set {0} down && ip link set {0} up".format(ifname)) if return_code == 0: return logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") def check_and_recover_nic_state(self, ifname): # TODO: Implement and test a way to recover the network interface for RedhatOSModernUtil pass def publish_hostname(self, hostname, recover_nic=False): # RedhatOSUtil was updated to conditionally run NetworkManager restart in response to a race condition between # NetworkManager restart and the agent restarting the network interface during publish_hostname. Keeping the # NetworkManager restart in RedhatOSModernUtil because the issue was not reproduced on these versions. self.restart_network_manager() DefaultOSUtil.publish_hostname(self, hostname) Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/suse.py000066400000000000000000000155701510742556200247670ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil # pylint: disable=W0611 from azurelinuxagent.common.exception import OSUtilError # pylint: disable=W0611 from azurelinuxagent.common.future import ustr # pylint: disable=W0611 from azurelinuxagent.common.osutil.default import DefaultOSUtil class SUSE11OSUtil(DefaultOSUtil): def __init__(self): super(SUSE11OSUtil, self).__init__() self.jit_enabled = True self.dhclient_name = 'dhcpcd' def set_hostname(self, hostname): fileutil.write_file('/etc/HOSTNAME', hostname) self._run_command_without_raising(["hostname", hostname], log_error=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", self.dhclient_name]) def is_dhcp_enabled(self): return True def stop_dhcp_service(self): self._run_command_without_raising(["/sbin/service", self.dhclient_name, "stop"], log_error=False) def start_dhcp_service(self): self._run_command_without_raising(["/sbin/service", self.dhclient_name, "start"], log_error=False) def start_network(self): self._run_command_without_raising(["/sbin/service", "network", "start"], log_error=False) def restart_ssh_service(self): self._run_command_without_raising(["/sbin/service", "sshd", "restart"], log_error=False) def stop_agent_service(self): self._run_command_without_raising(["/sbin/service", self.service_name, "stop"], log_error=False) def start_agent_service(self): self._run_command_without_raising(["/sbin/service", self.service_name, "start"], log_error=False) def register_agent_service(self): self._run_command_without_raising(["/sbin/insserv", self.service_name], log_error=False) def unregister_agent_service(self): self._run_command_without_raising(["/sbin/insserv", "-r", self.service_name], log_error=False) class SUSEOSUtil(SUSE11OSUtil): def __init__(self): super(SUSEOSUtil, self).__init__() self.dhclient_name = 
'wickedd-dhcp4' def publish_hostname(self, hostname, recover_nic=False): self.set_dhcp_hostname(hostname) self.set_hostname_record(hostname) ifname = self.get_if_name() # To push the hostname to the dhcp server we do not need to # bring down the interface, just make ifup do whatever is # necessary self.ifup(ifname) def ifup(self, ifname, retries=3, wait=5): logger.info('Interface {0} bounce with ifup'.format(ifname)) retry_limit = retries + 1 for attempt in range(1, retry_limit): try: shellutil.run_command(['ifup', ifname], log_error=True) except Exception: if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" def set_hostname(self, hostname): self._run_command_without_raising( ["hostnamectl", "set-hostname", hostname], log_error=False ) def set_dhcp_hostname(self, hostname): dhcp_config_file_path = '/etc/sysconfig/network/dhcp' hostname_send_setting = fileutil.get_line_startingwith( 'DHCLIENT_HOSTNAME_OPTION', dhcp_config_file_path ) if hostname_send_setting: value = hostname_send_setting.split('=')[-1] # wicked's source accepts values with double quotes, single quotes, and no quotes at all. if value in ('"AUTO"', "'AUTO'", 'AUTO') or value == '"{0}"'.format(hostname): # Return if auto send host-name is configured or the current # hostname is already set up to be sent return else: # Do not use update_conf_file as it moves the setting to the # end of the file separating it from the contextual comment new_conf = [] dhcp_conf = fileutil.read_file( dhcp_config_file_path).split('\n') for entry in dhcp_conf: if entry.startswith('DHCLIENT_HOSTNAME_OPTION'): new_conf.append( 'DHCLIENT_HOSTNAME_OPTION="{0}"'.
format(hostname) ) continue new_conf.append(entry) fileutil.write_file(dhcp_config_file_path, '\n'.join(new_conf)) else: fileutil.append_file( dhcp_config_file_path, 'DHCLIENT_HOSTNAME_OPTION="{0}"'. format(hostname) ) def stop_dhcp_service(self): self._run_command_without_raising(["systemctl", "stop", "{}.service".format(self.dhclient_name)], log_error=False) def start_dhcp_service(self): self._run_command_without_raising(["systemctl", "start", "{}.service".format(self.dhclient_name)], log_error=False) def start_network(self): self._run_command_without_raising(["systemctl", "start", "network.service"], log_error=False) def restart_ssh_service(self): self._run_command_without_raising(["systemctl", "restart", "sshd.service"], log_error=False) def stop_agent_service(self): self._run_command_without_raising(["systemctl", "stop", "{}.service".format(self.service_name)], log_error=False) def start_agent_service(self): self._run_command_without_raising(["systemctl", "start", "{}.service".format(self.service_name)], log_error=False) def register_agent_service(self): self._run_command_without_raising(["systemctl", "enable", "{}.service".format(self.service_name)], log_error=False) def unregister_agent_service(self): self._run_command_without_raising(["systemctl", "disable", "{}.service".format(self.service_name)], log_error=False) Azure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/systemd.py000066400000000000000000000103511510742556200254700ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
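`set_dhcp_hostname` above rewrites `DHCLIENT_HOSTNAME_OPTION` line by line rather than via `update_conf_file`, so the setting stays next to its contextual comment in `/etc/sysconfig/network/dhcp`. The same in-place rewrite, isolated as a standalone sketch (`set_sysconfig_value` is an illustrative helper, not part of the agent):

```python
def set_sysconfig_value(path, key, value):
    """Replace `key=...` in a sysconfig-style file in place, preserving
    surrounding comments; append the setting if the key is absent."""
    with open(path) as f:
        lines = f.read().split('\n')
    replaced = False
    for i, line in enumerate(lines):
        if line.startswith(key):
            # rewrite the line where it stands, keeping its position
            lines[i] = '{0}="{1}"'.format(key, value)
            replaced = True
    if not replaced:
        # key not present: append it, like the fileutil.append_file branch
        lines.append('{0}="{1}"'.format(key, value))
    with open(path, 'w') as f:
        f.write('\n'.join(lines))
```

The agent adds a further shortcut the sketch omits: it returns early when the current value is `AUTO` (in any quoting style wicked accepts) or already matches the hostname.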
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import shellutil def _get_os_util(): if _get_os_util.value is None: _get_os_util.value = get_osutil() return _get_os_util.value _get_os_util.value = None def is_systemd(): """ Determine if systemd is managing system services; the implementation follows the same strategy as, for example, sd_booted() in libsystemd, or /usr/sbin/service """ return os.path.exists("/run/systemd/system/") def get_version(): # the output is similar to # $ systemctl --version # systemd 245 (245.4-4ubuntu3) # +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP etc # # return fist line systemd 245 (245.4-4ubuntu3) try: output = shellutil.run_command(['systemctl', '--version']) version = output.split('\n')[0] return version except Exception: return "unknown" def get_unit_file_install_path(): """ e.g. /lib/systemd/system """ return _get_os_util().get_systemd_unit_file_install_path() def get_agent_unit_name(): """ e.g. walinuxagent.service """ return _get_os_util().get_service_name() + ".service" def get_agent_unit_file(): """ e.g. /lib/systemd/system/walinuxagent.service """ return os.path.join(get_unit_file_install_path(), get_agent_unit_name()) def get_agent_drop_in_path(): """ e.g. 
/lib/systemd/system/walinuxagent.service.d """ return os.path.join(get_unit_file_install_path(), "{0}.d".format(get_agent_unit_name())) def get_unit_property(unit_name, property_name): output = shellutil.run_command(["systemctl", "show", unit_name, "--property", property_name]) # Output is similar to # # systemctl show walinuxagent.service --property CPUQuotaPerSecUSec # CPUQuotaPerSecUSec=50ms match = re.match("[^=]+=(?P.+)", output) if match is None: raise ValueError("Can't find property {0} of {1}".format(property_name, unit_name)) return match.group('value') def set_unit_run_time_property(unit_name, property_name, value): """ Set a property of a unit at runtime Note: --runtime settings only apply until the next reboot """ try: # Ex: systemctl set-property foobar.service CPUWeight=200 --runtime shellutil.run_command(["systemctl", "set-property", unit_name, "{0}={1}".format(property_name, value), "--runtime"]) except shellutil.CommandError as e: raise ValueError("Can't set property {0} of {1}: {2}".format(property_name, unit_name, e)) def set_unit_run_time_properties(unit_name, property_names, values): """ Set multiple properties of a unit at runtime Note: --runtime settings only apply until the next reboot """ if len(property_names) != len(values): raise ValueError("The number of property names:{0} and values:{1} must be the same".format(property_names, values)) properties = ["{0}={1}".format(name, value) for name, value in zip(property_names, values)] try: # Ex: systemctl set-property foobar.service CPUWeight=200 MemoryMax=2G IPAccounting=yes --runtime shellutil.run_command(["systemctl", "set-property", unit_name] + properties + ["--runtime"]) except shellutil.CommandError as e: raise ValueError("Can't set properties {0} of {1}: {2}".format(properties, unit_name, e)) def is_unit_loaded(unit_name): """ Determine if a unit is loaded """ try: value = get_unit_property(unit_name, "LoadState") return value.lower() == "loaded" except shellutil.CommandError: return 
FalseAzure-WALinuxAgent-a976115/azurelinuxagent/common/osutil/ubuntu.py000066400000000000000000000143701510742556200253270ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import textwrap import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class Ubuntu14OSUtil(DefaultOSUtil): def __init__(self): super(Ubuntu14OSUtil, self).__init__() self.jit_enabled = True self.service_name = self.get_service_name() @staticmethod def get_service_name(): return "walinuxagent" def start_network(self): return shellutil.run("service networking start", chk_err=False) def stop_agent_service(self): try: shellutil.run_command(["service", self.service_name, "stop"]) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def start_agent_service(self): try: shellutil.run_command(["service", self.service_name, "start"]) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def remove_rules_files(self, rules_files=""): pass def restore_rules_files(self, rules_files=""): pass def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhcp/dhclient.*.leases') class Ubuntu12OSUtil(Ubuntu14OSUtil): def __init__(self): # pylint: disable=W0235 super(Ubuntu12OSUtil, self).__init__() # Override 
def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "dhclient3"]) class Ubuntu16OSUtil(Ubuntu14OSUtil): """ Ubuntu 16.04, 16.10, and 17.04. """ def __init__(self): super(Ubuntu16OSUtil, self).__init__() self.service_name = self.get_service_name() def register_agent_service(self): return shellutil.run("systemctl unmask {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("systemctl mask {0}".format(self.service_name), chk_err=False) class Ubuntu18OSUtil(Ubuntu16OSUtil): """ Ubuntu >=18.04 and <=24.04 """ def __init__(self): super(Ubuntu18OSUtil, self).__init__() self.service_name = self.get_service_name() def restart_if(self, ifname, retries=3, wait=5): """ Restart systemd-networkd """ retry_limit=retries+1 for attempt in range(1, retry_limit): try: shellutil.run_command(["systemctl", "restart", "systemd-networkd"]) except shellutil.CommandError as cmd_err: logger.warn("failed to restart systemd-networkd: return code {1}".format(cmd_err.returncode)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def start_network(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def stop_network(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return self.start_network() def stop_dhcp_service(self): return self.stop_network() def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_lease_endpoint(self): pathglob = "/run/systemd/netif/leases/*" logger.info("looking for leases in path [{0}]".format(pathglob)) endpoint = None for lease_file in glob.glob(pathglob): try: with 
open(lease_file) as f: lease = f.read() for line in lease.splitlines(): if line.startswith("OPTION_245"): option_245 = line.split("=")[1] options = [int(i, 16) for i in textwrap.wrap(option_245, 2)] endpoint = "{0}.{1}.{2}.{3}".format(*options) logger.info("found endpoint [{0}]".format(endpoint)) except Exception as e: logger.info( "Failed to parse {0}: {1}".format(lease_file, str(e)) ) if endpoint is not None: logger.info("cached endpoint found [{0}]".format(endpoint)) else: logger.info("cached endpoint not found") return endpoint class UbuntuOSUtil(Ubuntu16OSUtil): def __init__(self): # pylint: disable=W0235 super(UbuntuOSUtil, self).__init__() def restart_if(self, ifname, retries=3, wait=5): """ Restart an interface by bouncing the link. systemd-networkd observes this event, and forces a renew of DHCP. """ retry_limit = retries+1 for attempt in range(1, retry_limit): return_code = shellutil.run("ip link set {0} down && ip link set {0} up".format(ifname)) if return_code == 0: return logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") class UbuntuSnappyOSUtil(Ubuntu14OSUtil): def __init__(self): super(UbuntuSnappyOSUtil, self).__init__() self.conf_file_path = '/apps/walinuxagent/current/waagent.conf' Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/000077500000000000000000000000001510742556200237505ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/__init__.py000066400000000000000000000011651510742556200260640ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
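`get_dhcp_lease_endpoint` in `ubuntu.py` above recovers the WireServer endpoint from DHCP option 245 in systemd-networkd's lease files, decoding eight hex digits into a dotted quad. The decoding step, isolated as a sketch (the lease text in the docstring is an illustrative example of the format, not captured output):

```python
import textwrap


def parse_option_245(lease_text):
    """Extract the endpoint from systemd-networkd lease text, where an
    OPTION_245 line carries the IPv4 address as 8 hex digits, e.g.
    'OPTION_245=a83f8110'. Returns None when no such line exists."""
    endpoint = None
    for line in lease_text.splitlines():
        if line.startswith("OPTION_245"):
            hex_value = line.split("=")[1]
            # split into 2-digit hex pairs, one per octet
            octets = [int(pair, 16) for pair in textwrap.wrap(hex_value, 2)]
            endpoint = "{0}.{1}.{2}.{3}".format(*octets)
    return endpoint
```

Hex `a83f8110` decodes to `168.63.129.16`, the well-known Azure host address the agent is looking for.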
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/extensions_goal_state.py000066400000000000000000000165351510742556200307350ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ import datetime from azurelinuxagent.common import logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.exception import AgentError from azurelinuxagent.common.future import datetime_min_utc from azurelinuxagent.common.utils import textutil, timeutil class GoalStateChannel(object): WireServer = "WireServer" HostGAPlugin = "HostGAPlugin" Empty = "Empty" class GoalStateSource(object): Fabric = "Fabric" FastTrack = "FastTrack" Empty = "Empty" class VmSettingsParseError(AgentError): """ Error raised when the VmSettings are malformed """ def __init__(self, message, etag, vm_settings_text, inner=None): super(VmSettingsParseError, self).__init__(message, inner) self.etag = etag self.vm_settings_text = vm_settings_text class ExtensionsGoalState(object): """ ExtensionsGoalState represents the extensions information in the goal state; that information can originate from ExtensionsConfig when the goal state is retrieved from the WireServer or from vmSettings when it is retrieved from the HostGAPlugin. NOTE: This is an abstract class. The corresponding concrete classes can be instantiated using the ExtensionsGoalStateFactory. """ def __init__(self): self._is_outdated = False @property def id(self): """ Returns a string that includes the incarnation number if the ExtensionsGoalState was created from ExtensionsConfig, or the etag if it was created from vmSettings. """ raise NotImplementedError() @property def is_outdated(self): """ A goal state can be outdated if, for example, the VM Agent is using Fast Track and support for it stops (e.g. the VM is migrated to a node with an older version of the HostGAPlugin) and now the Agent is fetching goal states via the WireServer.
""" return self._is_outdated @is_outdated.setter def is_outdated(self, value): self._is_outdated = value @property def svd_sequence_number(self): raise NotImplementedError() @property def activity_id(self): raise NotImplementedError() @property def correlation_id(self): raise NotImplementedError() @property def created_on_timestamp(self): raise NotImplementedError() @property def channel(self): """ Whether the goal state was retrieved from the WireServer or the HostGAPlugin """ raise NotImplementedError() @property def source(self): """ Whether the goal state originated from Fabric or Fast Track """ raise NotImplementedError() @property def status_upload_blob(self): raise NotImplementedError() @property def status_upload_blob_type(self): raise NotImplementedError() def _set_status_upload_blob_type(self, value): raise NotImplementedError() @property def required_features(self): raise NotImplementedError() @property def on_hold(self): raise NotImplementedError() @property def agent_families(self): raise NotImplementedError() @property def extensions(self): raise NotImplementedError() def get_redacted_text(self): """ Returns the raw text (either the ExtensionsConfig or the vmSettings) with any confidential data removed, or an empty string for empty goal states. """ raise NotImplementedError() def _do_common_validations(self): """ Does validations common to vmSettings and ExtensionsConfig """ if self.status_upload_blob_type not in ["BlockBlob", "PageBlob"]: logger.info("Status Blob type '{0}' is not valid, assuming BlockBlob", self.status_upload_blob) self._set_status_upload_blob_type("BlockBlob") @staticmethod def _ticks_to_utc_timestamp(ticks_string): """ Takes 'ticks', a string indicating the number of ticks since midnight 0001-01-01 00:00:00, and returns a UTC timestamp (every tick is 1/10000000 of a second). 
""" as_date_time = None if ticks_string not in (None, ""): try: as_date_time = datetime_min_utc + datetime.timedelta(seconds=float(ticks_string) / 10 ** 7) except Exception as exception: logger.info("Can't parse ticks: {0}", textutil.format_exception(exception)) if as_date_time is None: as_date_time = datetime_min_utc return timeutil.create_utc_timestamp(as_date_time) @staticmethod def _string_to_id(id_string): """ Takes 'id', a string indicating an ID, and returns a null GUID if the string is None or empty; otherwise return 'id' unchanged """ if id_string in (None, ""): return AgentGlobals.GUID_ZERO return id_string def supports_encoded_signature(self): """ Returns boolean indicating if the goal state API supports the 'encoded_signature' property. - ExtensionsConfig goal states should always return True. - VmSettings goal states should check the HGAP version. """ raise NotImplementedError() class EmptyExtensionsGoalState(ExtensionsGoalState): def __init__(self, incarnation): super(EmptyExtensionsGoalState, self).__init__() self._id = "incarnation_{0}".format(incarnation) self._incarnation = incarnation @property def id(self): return self._id @property def incarnation(self): return self._incarnation @property def svd_sequence_number(self): return self._incarnation @property def activity_id(self): return AgentGlobals.GUID_ZERO @property def correlation_id(self): return AgentGlobals.GUID_ZERO @property def created_on_timestamp(self): return datetime_min_utc @property def channel(self): return GoalStateChannel.Empty @property def source(self): return GoalStateSource.Empty @property def status_upload_blob(self): return None @property def status_upload_blob_type(self): return None def _set_status_upload_blob_type(self, value): raise TypeError("EmptyExtensionsGoalState is immutable; cannot change the value of the status upload blob") @property def required_features(self): return [] @property def on_hold(self): return False @property def agent_families(self): return [] 
@property def extensions(self): return [] def get_redacted_text(self): return '' def supports_encoded_signature(self): return False Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/extensions_goal_state_factory.py000066400000000000000000000027341510742556200324600ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ from azurelinuxagent.common.protocol.extensions_goal_state import EmptyExtensionsGoalState from azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config import ExtensionsGoalStateFromExtensionsConfig from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import ExtensionsGoalStateFromVmSettings class ExtensionsGoalStateFactory(object): @staticmethod def create_empty(incarnation): return EmptyExtensionsGoalState(incarnation) @staticmethod def create_from_extensions_config(incarnation, xml_text, wire_client): return ExtensionsGoalStateFromExtensionsConfig(incarnation, xml_text, wire_client) @staticmethod def create_from_vm_settings(etag, json_text, correlation_id): return ExtensionsGoalStateFromVmSettings(etag, json_text, correlation_id) extensions_goal_state_from_extensions_config.py000066400000000000000000000750121510742556200355000ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft 
Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import json from collections import defaultdict from azurelinuxagent.common import logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import ExtensionsConfigError from azurelinuxagent.common.future import ustr, urlparse from azurelinuxagent.common.protocol.extensions_goal_state import ExtensionsGoalState, GoalStateChannel, GoalStateSource from azurelinuxagent.common.protocol.restapi import ExtensionSettings, Extension, VMAgentFamily, ExtensionState, InVMGoalStateMetaData from azurelinuxagent.common.utils.textutil import parse_doc, parse_json, findall, find, findtext, getattrib, gettext, \ format_exception, is_str_none_or_whitespace, is_str_empty, gettextxml class ExtensionsGoalStateFromExtensionsConfig(ExtensionsGoalState): def __init__(self, incarnation, xml_text, wire_client): super(ExtensionsGoalStateFromExtensionsConfig, self).__init__() self._id = "incarnation_{0}".format(incarnation) self._is_outdated = False self._incarnation = incarnation self._text = xml_text self._status_upload_blob = None self._status_upload_blob_type = None self._status_upload_blob_xml_node = None self._artifacts_profile_blob_xml_node = None self._required_features = [] self._on_hold = False self._activity_id = None self._correlation_id = None self._created_on_timestamp = None self._agent_families = [] self._extensions = [] try: 
self._parse_extensions_config(xml_text, wire_client) self._do_common_validations() except Exception as e: raise ExtensionsConfigError("Error parsing ExtensionsConfig (incarnation: {0}): {1}\n{2}".format(incarnation, format_exception(e), self.get_redacted_text())) def _parse_extensions_config(self, xml_text, wire_client): xml_doc = parse_doc(xml_text) ga_families_list = find(xml_doc, "GAFamilies") ga_families = findall(ga_families_list, "GAFamily") for ga_family in ga_families: name = findtext(ga_family, "Name") version = findtext(ga_family, "Version") from_version = findtext(ga_family, "FromVersion") is_version_from_rsm = findtext(ga_family, "IsVersionFromRSM") is_vm_enabled_for_rsm_upgrades = findtext(ga_family, "IsVMEnabledForRSMUpgrades") uris_list = find(ga_family, "Uris") uris = findall(uris_list, "Uri") family = VMAgentFamily(name) family.version = version family.from_version = from_version if is_version_from_rsm is not None: # checking None because converting string to lowercase family.is_version_from_rsm = is_version_from_rsm.lower() == "true" if is_vm_enabled_for_rsm_upgrades is not None: # checking None because converting string to lowercase family.is_vm_enabled_for_rsm_upgrades = is_vm_enabled_for_rsm_upgrades.lower() == "true" for uri in uris: family.uris.append(gettext(uri)) self._agent_families.append(family) self.__parse_plugins_and_settings_and_populate_ext_handlers(xml_doc) required_features_list = find(xml_doc, "RequiredFeatures") if required_features_list is not None: self._parse_required_features(required_features_list) self._status_upload_blob_xml_node = find(xml_doc, "StatusUploadBlob") self._status_upload_blob = gettext(self._status_upload_blob_xml_node) self._status_upload_blob_type = getattrib(self._status_upload_blob_xml_node, "statusBlobType") logger.verbose("Extension config shows status blob type as [{0}]", self._status_upload_blob_type) self._artifacts_profile_blob_xml_node = find(xml_doc, "InVMArtifactsProfileBlob") self._on_hold = 
ExtensionsGoalStateFromExtensionsConfig._fetch_extensions_on_hold(self._artifacts_profile_blob_xml_node, wire_client) in_vm_gs_metadata = InVMGoalStateMetaData(find(xml_doc, "InVMGoalStateMetaData")) self._activity_id = self._string_to_id(in_vm_gs_metadata.activity_id) self._correlation_id = self._string_to_id(in_vm_gs_metadata.correlation_id) self._created_on_timestamp = self._ticks_to_utc_timestamp(in_vm_gs_metadata.created_on_ticks) @staticmethod def _fetch_extensions_on_hold(artifacts_profile_blob_xml_node, wire_client): def log_info(message): logger.info(message) add_event(op=WALAEventOperation.ArtifactsProfileBlob, message=message, is_success=True, log_event=False) def log_warning(message): logger.warn(message) add_event(op=WALAEventOperation.ArtifactsProfileBlob, message=message, is_success=False, log_event=False) artifacts_profile_blob = gettext(artifacts_profile_blob_xml_node) if is_str_none_or_whitespace(artifacts_profile_blob): log_info("ExtensionsConfig does not include a InVMArtifactsProfileBlob; will assume the VM is not on hold") return False try: profile = wire_client.fetch_artifacts_profile_blob(artifacts_profile_blob) except Exception as error: log_warning("Can't download the artifacts profile blob; will assume the VM is not on hold. {0}".format(ustr(error))) return False if is_str_empty(profile): log_info("The artifacts profile blob is empty; will assume the VM is not on hold.") return False try: artifacts_profile = _InVMArtifactsProfile(profile) except Exception as exception: log_warning("Can't parse the artifacts profile blob; will assume the VM is not on hold. 
Error: {0}".format(ustr(exception))) return False return artifacts_profile.get_on_hold() @property def id(self): return self._id @property def incarnation(self): return self._incarnation @property def svd_sequence_number(self): return self._incarnation @property def activity_id(self): return self._activity_id @property def correlation_id(self): return self._correlation_id @property def created_on_timestamp(self): return self._created_on_timestamp @property def channel(self): return GoalStateChannel.WireServer @property def source(self): return GoalStateSource.Fabric @property def status_upload_blob(self): return self._status_upload_blob @property def status_upload_blob_type(self): return self._status_upload_blob_type def _set_status_upload_blob_type(self, value): self._status_upload_blob_type = value @property def required_features(self): return self._required_features @property def on_hold(self): return self._on_hold @property def agent_families(self): return self._agent_families @property def extensions(self): return self._extensions def get_redacted_text(self): def redact_url(unredacted, xml_node, name): text_xml = gettextxml(xml_node) # Note that we need to redact the raw XML text (which may contain escape sequences) if text_xml is None: return unredacted parsed = urlparse(text_xml) redacted = unredacted.replace(parsed.query, "***REDACTED***") if redacted == unredacted: raise Exception('Could not redact {0}'.format(name)) return redacted try: text = self._text text = redact_url(text, self._status_upload_blob_xml_node, "StatusUploadBlob") text = redact_url(text, self._artifacts_profile_blob_xml_node, "InVMArtifactsProfileBlob") for ext_handler in self._extensions: for extension in ext_handler.settings: if extension.protectedSettings is not None: original = text text = text.replace(extension.protectedSettings, "***REDACTED***") if text == original: return 'Could not redact protectedSettings for {0}'.format(extension.name) return text except Exception as e: return 
"Error redacting text: {0}".format(e) def _parse_required_features(self, required_features_list): for required_feature in findall(required_features_list, "RequiredFeature"): feature_name = findtext(required_feature, "Name") # per the documentation, RequiredFeatures also have a "Value" attribute but currently it is not being populated self._required_features.append(feature_name) def __parse_plugins_and_settings_and_populate_ext_handlers(self, xml_doc): """ Sample ExtensionConfig Plugin and PluginSettings: { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"01_add_extensions_with_dependency":"ff2a3da6-8e12-4ab6-a4ca-4e3a473ab385"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"01_add_extensions_with_dependency":"2e837740-cf7e-4528-b3a4-241002618f05"} } } ] } """ plugins_list = find(xml_doc, "Plugins") plugins = findall(plugins_list, "Plugin") plugin_settings_list = find(xml_doc, "PluginSettings") plugin_settings = findall(plugin_settings_list, "Plugin") for plugin in plugins: extension = Extension() try: ExtensionsGoalStateFromExtensionsConfig._parse_plugin(extension, plugin) ExtensionsGoalStateFromExtensionsConfig._parse_plugin_settings(extension, plugin_settings) except ExtensionsConfigError as error: extension.invalid_setting_reason = ustr(error) self._extensions.append(extension) @staticmethod def _parse_plugin(extension, plugin): """ Sample config: https://rdfecurrentuswestcache3.blob.core.test-cint.azure-test.net/0e53c53ef0be4178bacb0a1fecf12a74/Microsoft.Azure.Extensions_CustomScript_usstagesc_manifest.xml https://rdfecurrentuswestcache4.blob.core.test-cint.azure-test.net/0e53c53ef0be4178bacb0a1fecf12a74/Microsoft.Azure.Extensions_CustomScript_usstagesc_manifest.xml Note that the `additionalLocations` subnode is populated with links generated by PIR for resiliency. In regions with this feature enabled, CRP will provide any extra links in the format above. If no extra links are provided, the subnode will not exist. 
""" def _log_error_if_none(attr_name, value): # Plugin Name and Version are very essential fields, without them we wont be able to even report back to CRP # about that handler. For those cases we need to fail the GoalState completely but currently we dont support # reporting status at a GoalState level (we only report at a handler level). # Once that functionality is added to the GA, we would raise here rather than just report error in our logs. if value in (None, ""): add_event(op=WALAEventOperation.InvalidExtensionConfig, message="{0} is None for ExtensionConfig, logging error".format(attr_name), log_event=True, is_success=False) return value extension.name = _log_error_if_none("Extensions.Plugins.Plugin.name", getattrib(plugin, "name")) extension.version = _log_error_if_none("Extensions.Plugins.Plugin.version", getattrib(plugin, "version")) extension.state = getattrib(plugin, "state") if extension.state in (None, ""): raise ExtensionsConfigError("Received empty Extensions.Plugins.Plugin.state, failing Handler") # The 'encodedSignature' property is optional. If absent, it may mean the ExtensionsConfig API does not support # it, or the extension is not signed. In either case, we set extension.encoded_signature to an empty string. # Note that getattrib returns an empty string already if an attribute does not exist in a node. 
extension.encoded_signature = getattrib(plugin, "encodedSignature") def getattrib_wrapped_in_list(node, attr_name): attr = getattrib(node, attr_name) return [attr] if attr not in (None, "") else [] location = getattrib_wrapped_in_list(plugin, "location") failover_location = getattrib_wrapped_in_list(plugin, "failoverlocation") locations = location + failover_location additional_location_node = find(plugin, "additionalLocations") if additional_location_node is not None: nodes_list = findall(additional_location_node, "additionalLocation") locations += [gettext(node) for node in nodes_list] for uri in locations: extension.manifest_uris.append(uri) @staticmethod def _parse_plugin_settings(extension, plugin_settings): """ Sample config: { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"01_add_extensions_with_dependency":"ff2a3da6-8e12-4ab6-a4ca-4e3a473ab385"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World TestTry2!"},"parameters":[{"name":"extensionName","value":"firstRunCommand"}],"timeoutInSeconds":120} } } ] } """ if plugin_settings is None: return extension_name = extension.name version = extension.version def to_lower(str_to_change): return str_to_change.lower() if str_to_change is not None else None extension_plugin_settings = [x for x in plugin_settings if to_lower(getattrib(x, "name")) == to_lower(extension_name)] if not extension_plugin_settings: return settings = [x for x in extension_plugin_settings if getattrib(x, "version") == version] if len(settings) != len(extension_plugin_settings): msg = "Extension PluginSettings Version Mismatch! 
Expected PluginSettings version: {0} for Extension: {1} but found versions: ({2})".format( version, extension_name, ', '.join(set([getattrib(x, "version") for x in extension_plugin_settings]))) add_event(op=WALAEventOperation.PluginSettingsVersionMismatch, message=msg, log_event=True, is_success=False) raise ExtensionsConfigError(msg) if len(settings) > 1: msg = "Multiple plugin settings found for the same extension: {0} and version: {1} (Expected: 1; Available: {2})".format( extension_name, version, len(settings)) raise ExtensionsConfigError(msg) plugin_settings_node = settings[0] runtime_settings_nodes = findall(plugin_settings_node, "RuntimeSettings") extension_runtime_settings_nodes = findall(plugin_settings_node, "ExtensionRuntimeSettings") if any(runtime_settings_nodes) and any(extension_runtime_settings_nodes): # There can only be a single RuntimeSettings node or multiple ExtensionRuntimeSettings nodes per Plugin msg = "Both RuntimeSettings and ExtensionRuntimeSettings found for the same extension: {0} and version: {1}".format( extension_name, version) raise ExtensionsConfigError(msg) if runtime_settings_nodes: if len(runtime_settings_nodes) > 1: msg = "Multiple RuntimeSettings found for the same extension: {0} and version: {1} (Expected: 1; Available: {2})".format( extension_name, version, len(runtime_settings_nodes)) raise ExtensionsConfigError(msg) # Only Runtime settings available, parse that ExtensionsGoalStateFromExtensionsConfig.__parse_runtime_settings(plugin_settings_node, runtime_settings_nodes[0], extension_name, extension) elif extension_runtime_settings_nodes: # Parse the ExtensionRuntime settings for the given extension ExtensionsGoalStateFromExtensionsConfig.__parse_extension_runtime_settings(plugin_settings_node, extension_runtime_settings_nodes, extension) @staticmethod def __get_dependency_level_from_node(depends_on_node, name): depends_on_level = 0 if depends_on_node is not None: try: depends_on_level = int(getattrib(depends_on_node, 
"dependencyLevel")) except (ValueError, TypeError): logger.warn("Could not parse dependencyLevel for handler {0}. Setting it to 0".format(name)) depends_on_level = 0 return depends_on_level @staticmethod def __parse_runtime_settings(plugin_settings_node, runtime_settings_node, extension_name, extension): """ Sample Plugin in PluginSettings containing DependsOn and RuntimeSettings (single settings per extension) - { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "", "protectedSettings": "", "publicSettings": {"UserName":"test1234"} } } ] } """ depends_on_nodes = findall(plugin_settings_node, "DependsOn") if len(depends_on_nodes) > 1: msg = "Extension Handler can only have a single dependsOn node for Single config extensions. Found: {0}".format( len(depends_on_nodes)) raise ExtensionsConfigError(msg) depends_on_node = depends_on_nodes[0] if depends_on_nodes else None depends_on_level = ExtensionsGoalStateFromExtensionsConfig.__get_dependency_level_from_node(depends_on_node, extension_name) ExtensionsGoalStateFromExtensionsConfig.__parse_and_add_extension_settings(runtime_settings_node, extension_name, extension, depends_on_level) @staticmethod def __parse_extension_runtime_settings(plugin_settings_node, extension_runtime_settings_nodes, extension): """ Sample PluginSettings containing DependsOn and ExtensionRuntimeSettings - { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Third: Hello World 3!"}} } } ] } """ # Parse and cache the Dependencies for each extension first dependency_levels = defaultdict(int) for depends_on_node in findall(plugin_settings_node, "DependsOn"): extension_name = getattrib(depends_on_node, "name") if 
extension_name in (None, ""): raise ExtensionsConfigError("No Name not specified for DependsOn object in ExtensionRuntimeSettings for MultiConfig!") dependency_level = ExtensionsGoalStateFromExtensionsConfig.__get_dependency_level_from_node(depends_on_node, extension_name) dependency_levels[extension_name] = dependency_level extension.supports_multi_config = True for extension_runtime_setting_node in extension_runtime_settings_nodes: # Name and State will only be set for ExtensionRuntimeSettings for Multi-Config extension_name = getattrib(extension_runtime_setting_node, "name") if extension_name in (None, ""): raise ExtensionsConfigError("Extension Name not specified for ExtensionRuntimeSettings for MultiConfig!") # State can either be `ExtensionState.Enabled` (default) or `ExtensionState.Disabled` state = getattrib(extension_runtime_setting_node, "state") state = ustr(state.lower()) if state not in (None, "") else ExtensionState.Enabled ExtensionsGoalStateFromExtensionsConfig.__parse_and_add_extension_settings(extension_runtime_setting_node, extension_name, extension, dependency_levels[extension_name], state=state) @staticmethod def __parse_and_add_extension_settings(settings_node, name, extension, depends_on_level, state=ExtensionState.Enabled): seq_no = getattrib(settings_node, "seqNo") if seq_no in (None, ""): raise ExtensionsConfigError("SeqNo not specified for the Extension: {0}".format(name)) try: runtime_settings = json.loads(gettext(settings_node)) except ValueError as error: logger.error("Invalid extension settings: {0}", ustr(error)) # Incase of invalid/no settings, add the name and seqNo of the Extension and treat it as an extension with # no settings since we were able to successfully parse those data properly. Without this, we wont report # anything for that sequence number and CRP would eventually have to timeout rather than fail fast. 
extension.settings.append( ExtensionSettings(name=name, sequenceNumber=seq_no, state=state, dependencyLevel=depends_on_level)) return for plugin_settings_list in runtime_settings["runtimeSettings"]: handler_settings = plugin_settings_list["handlerSettings"] extension_settings = ExtensionSettings() # There is no "extension name" for single Handler Settings. Use HandlerName for those extension_settings.name = name extension_settings.state = state extension_settings.sequenceNumber = int(seq_no) extension_settings.publicSettings = handler_settings.get("publicSettings") extension_settings.protectedSettings = handler_settings.get("protectedSettings") extension_settings.dependencyLevel = depends_on_level thumbprint = handler_settings.get("protectedSettingsCertThumbprint") extension_settings.certificateThumbprint = thumbprint extension.settings.append(extension_settings) def supports_encoded_signature(self): """ Returns bool indicating if the ExtensionsConfig API supports the 'encoded_signature' extension property. Should always return True. """ return True # Do not extend this class class _InVMArtifactsProfile(object): """ deserialized json string of InVMArtifactsProfile. 
It is expected to contain the following fields: * inVMArtifactsProfileBlobSeqNo * profileId (optional) * onHold (optional) * certificateThumbprint (optional) * encryptedHealthChecks (optional) * encryptedApplicationProfile (optional) """ def __init__(self, artifacts_profile_json): self._on_hold = False artifacts_profile = parse_json(artifacts_profile_json) on_hold = artifacts_profile.get('onHold') if on_hold is not None: # accept both bool and str values on_hold_normalized = str(on_hold).lower() if on_hold_normalized == "true": self._on_hold = True elif on_hold_normalized == "false": self._on_hold = False else: raise Exception("Invalid value for onHold: {0}".format(on_hold)) def get_on_hold(self): return self._on_hold Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/extensions_goal_state_from_vm_settings.py000066400000000000000000000710111510742556200343700ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
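The `_InVMArtifactsProfile` parser above accepts `onHold` as either a bool or a string ("true"/"false", any casing) and rejects anything else. A minimal standalone sketch of that normalization (illustrative only, not part of the agent source; the function name `parse_on_hold` is invented for this example):

```python
def parse_on_hold(value):
    # Normalize an 'onHold' value that may arrive as a bool or as a "true"/"false"
    # string; mirrors the acceptance logic of _InVMArtifactsProfile.__init__.
    if value is None:
        return False
    normalized = str(value).lower()  # str(True) == "True", so bools normalize too
    if normalized == "true":
        return True
    if normalized == "false":
        return False
    raise ValueError("Invalid value for onHold: {0}".format(value))

print(parse_on_hold(True), parse_on_hold("False"), parse_on_hold(None))  # True False False
```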
#
# Requires Python 2.6+ and Openssl 1.0+

import json
import sys

from azurelinuxagent.common import logger
from azurelinuxagent.common.AgentGlobals import AgentGlobals
from azurelinuxagent.common.event import WALAEventOperation, add_event
from azurelinuxagent.common.future import ustr, urlparse, datetime_min_utc
from azurelinuxagent.common.protocol.extensions_goal_state import ExtensionsGoalState, GoalStateChannel, VmSettingsParseError
from azurelinuxagent.common.protocol.restapi import VMAgentFamily, Extension, ExtensionRequestedState, ExtensionSettings
from azurelinuxagent.common.utils import timeutil
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion

# The 'encodedSignature' property is only supported on newer versions of HGAP.
_MIN_HGAP_VERSION_FOR_EXT_SIGNATURE = FlexibleVersion("1.0.8.159")


class ExtensionsGoalStateFromVmSettings(ExtensionsGoalState):
    def __init__(self, etag, json_text, correlation_id):
        super(ExtensionsGoalStateFromVmSettings, self).__init__()
        self._id = "etag_{0}".format(etag)
        self._etag = etag
        self._svd_sequence_number = 0
        self._hostga_plugin_correlation_id = correlation_id
        self._text = json_text
        self._host_ga_plugin_version = FlexibleVersion('0.0.0.0')
        self._schema_version = FlexibleVersion('0.0.0.0')
        self._activity_id = AgentGlobals.GUID_ZERO
        self._correlation_id = AgentGlobals.GUID_ZERO
        self._created_on_timestamp = timeutil.create_utc_timestamp(datetime_min_utc)
        self._source = None
        self._status_upload_blob = None
        self._status_upload_blob_type = None
        self._required_features = []
        self._on_hold = False
        self._agent_families = []
        self._extensions = []

        try:
            self._parse_vm_settings(json_text)
            self._do_common_validations()
        except Exception as e:
            message = "Error parsing vmSettings [HGAP: {0} Etag:{1}]: {2}".format(self._host_ga_plugin_version, etag, ustr(e))
            raise VmSettingsParseError(message, etag, self.get_redacted_text())

    @property
    def id(self):
        return self._id

    @property
    def etag(self):
        return self._etag

    @property
    def svd_sequence_number(self):
        return self._svd_sequence_number

    @property
    def host_ga_plugin_version(self):
        return self._host_ga_plugin_version

    @property
    def schema_version(self):
        return self._schema_version

    @property
    def activity_id(self):
        """
        The CRP activity id
        """
        return self._activity_id

    @property
    def correlation_id(self):
        """
        The correlation id for the CRP operation
        """
        return self._correlation_id

    @property
    def hostga_plugin_correlation_id(self):
        """
        The correlation id for the call to the HostGAPlugin vmSettings API
        """
        return self._hostga_plugin_correlation_id

    @property
    def created_on_timestamp(self):
        """
        Timestamp assigned by the CRP (time at which the goal state was created)
        """
        return self._created_on_timestamp

    @property
    def channel(self):
        return GoalStateChannel.HostGAPlugin

    @property
    def source(self):
        return self._source

    @property
    def status_upload_blob(self):
        return self._status_upload_blob

    @property
    def status_upload_blob_type(self):
        return self._status_upload_blob_type

    def _set_status_upload_blob_type(self, value):
        self._status_upload_blob_type = value

    @property
    def required_features(self):
        return self._required_features

    @property
    def on_hold(self):
        return self._on_hold

    @property
    def agent_families(self):
        return self._agent_families

    @property
    def extensions(self):
        return self._extensions

    def get_redacted_text(self):
        try:
            text = self._text
            if self.status_upload_blob is not None:
                parsed = urlparse(self.status_upload_blob)
                original = text
                text = text.replace(parsed.query, "***REDACTED***")
                if text == original:
                    raise Exception('Could not redact the status upload blob')
            for ext_handler in self._extensions:
                for extension in ext_handler.settings:
                    if extension.protectedSettings is not None:
                        original = text
                        text = text.replace(extension.protectedSettings, "***REDACTED***")
                        if text == original:
                            return 'Could not redact protectedSettings for {0}'.format(extension.name)
            return text
        except Exception as e:
            return "Error redacting text: {0}".format(e)

    def _parse_vm_settings(self, json_text):
        vm_settings = _CaseFoldedDict.from_dict(json.loads(json_text))
        self._parse_simple_attributes(vm_settings)
        self._parse_status_upload_blob(vm_settings)
        self._parse_required_features(vm_settings)
        self._parse_agent_manifests(vm_settings)
        self._parse_extensions(vm_settings)

    def _parse_simple_attributes(self, vm_settings):
        # Sample:
        # {
        #     "hostGAPluginVersion": "1.0.8.115",
        #     "vmSettingsSchemaVersion": "0.0",
        #     "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9",
        #     "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e",
        #     "inSvdSeqNo": 1,
        #     "extensionsLastModifiedTickCount": 637726657706205217,
        #     "extensionGoalStatesSource": "FastTrack",
        #     ...
        # }

        # The HGAP version is included in some messages, so parse it first
        host_ga_plugin_version = vm_settings.get("hostGAPluginVersion")
        if host_ga_plugin_version is not None:
            self._host_ga_plugin_version = FlexibleVersion(host_ga_plugin_version)

        self._activity_id = self._string_to_id(vm_settings.get("activityId"))
        self._correlation_id = self._string_to_id(vm_settings.get("correlationId"))
        self._svd_sequence_number = self._string_to_id(vm_settings.get("inSvdSeqNo"))
        self._created_on_timestamp = self._ticks_to_utc_timestamp(vm_settings.get("extensionsLastModifiedTickCount"))

        schema_version = vm_settings.get("vmSettingsSchemaVersion")
        if schema_version is not None:
            self._schema_version = FlexibleVersion(schema_version)

        on_hold = vm_settings.get("onHold")
        if on_hold is not None:
            self._on_hold = on_hold

        self._source = vm_settings.get("extensionGoalStatesSource")
        if self._source is None:
            self._source = "UNKNOWN"

    def _parse_status_upload_blob(self, vm_settings):
        # Sample:
        # {
        #     ...
        #     "statusUploadBlob": {
        #         "statusBlobType": "BlockBlob",
        #         "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w"
        #     },
        #     ...
        # }
        status_upload_blob = vm_settings.get("statusUploadBlob")
        if status_upload_blob is None:
            self._status_upload_blob = None
            self._status_upload_blob_type = "BlockBlob"
        else:
            self._status_upload_blob = status_upload_blob.get("value")
            if self._status_upload_blob is None:
                raise Exception("Missing statusUploadBlob.value")
            self._status_upload_blob_type = status_upload_blob.get("statusBlobType")
            if self._status_upload_blob_type is None:
                self._status_upload_blob_type = "BlockBlob"

    def _parse_required_features(self, vm_settings):
        # Sample:
        # {
        #     ...
        #     "requiredFeatures": [
        #         {
        #             "name": "MultipleExtensionsPerHandler"
        #         }
        #     ],
        #     ...
        # }
        required_features = vm_settings.get("requiredFeatures")
        if required_features is not None:
            if not isinstance(required_features, list):
                raise Exception("requiredFeatures should be an array (got {0})".format(required_features))

            def get_required_features_names():
                for feature in required_features:
                    name = feature.get("name")
                    if name is None:
                        raise Exception("A required feature is missing the 'name' property (got {0})".format(feature))
                    yield name

            self._required_features.extend(get_required_features_names())

    def _parse_agent_manifests(self, vm_settings):
        # Sample:
        # {
        #     ...
        #     "gaFamilies": [
        #         {
        #             "name": "Prod",
        #             "version": "9.9.9.9",
        #             "from_version": "9.9.9.9",
        #             "isVersionFromRSM": true,
        #             "isVMEnabledForRSMUpgrades": true,
        #             "uris": [
        #                 "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml",
        #                 "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml"
        #             ]
        #         },
        #         {
        #             "name": "Test",
        #             "uris": [
        #                 "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml",
        #                 "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml"
        #             ]
        #         }
        #     ],
        #     ...
        # }
        families = vm_settings.get("gaFamilies")
        if families is None:
            return
        if not isinstance(families, list):
            raise Exception("gaFamilies should be an array (got {0})".format(families))

        for family in families:
            name = family["name"]
            version = family.get("version")
            from_version = family.get("fromVersion")
            is_version_from_rsm = family.get("isVersionFromRSM")
            is_vm_enabled_for_rsm_upgrades = family.get("isVMEnabledForRSMUpgrades")
            uris = family.get("uris")
            if uris is None:
                uris = []
            agent_family = VMAgentFamily(name)
            agent_family.version = version
            agent_family.from_version = from_version
            agent_family.is_version_from_rsm = is_version_from_rsm
            agent_family.is_vm_enabled_for_rsm_upgrades = is_vm_enabled_for_rsm_upgrades
            for u in uris:
                agent_family.uris.append(u)
            self._agent_families.append(agent_family)

    def _parse_extensions(self, vm_settings):
        # Sample (NOTE: The first sample is single-config, the second multi-config):
        # {
        #     ...
        #     "extensionGoalStates": [
        #         {
        #             "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent",
        #             "version": "1.9.1",
        #             "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml",
        #             "state": "enabled",
        #             "autoUpgrade": true,
        #             "runAsStartupTask": false,
        #             "isJson": true,
        #             "useExactVersion": true,
        #             "encodedSignature": "MIIn...",
        #             "settingsSeqNo": 0,
        #             "settings": [
        #                 {
        #                     "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F",
        #                     "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==",
        #                     "publicSettings": "{\"GCS_AUTO_CONFIG\":true}"
        #                 }
        #             ],
        #             "dependsOn": [
        #                 ...
        #             ]
        #         },
        #         {
        #             "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux",
        #             "version": "1.2.0",
        #             "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml",
        #             "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml",
        #             "additionalLocations": [
        #                 "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml"
        #             ],
        #             "state": "enabled",
        #             "autoUpgrade": true,
        #             "runAsStartupTask": false,
        #             "isJson": true,
        #             "useExactVersion": true,
        #             "settingsSeqNo": 0,
        #             "isMultiConfig": true,
        #             "settings": [
        #                 {
        #                     "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}",
        #                     "seqNo": 0,
        #                     "extensionName": "MCExt1",
        #                     "extensionState": "enabled"
        #                 },
        #                 {
        #                     "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}",
        #                     "seqNo": 0,
        #                     "extensionName": "MCExt2",
        #                     "extensionState": "enabled"
        #                 },
        #                 {
        #                     "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}",
        #                     "seqNo": 0,
        #                     "extensionName": "MCExt3",
        #                     "extensionState": "enabled"
        #                 }
        #             ],
        #             "dependsOn": [
        #                 ...
        #             ]
        #         }
        #         ...
        #     ]
        #     ...
        # }
        extension_goal_states = vm_settings.get("extensionGoalStates")
        if extension_goal_states is not None:
            if not isinstance(extension_goal_states, list):
                raise Exception("extension_goal_states should be an array (got {0})".format(type(extension_goal_states)))  # report only the type, since the value may contain secrets

            for extension_gs in extension_goal_states:
                extension = Extension()

                extension.name = extension_gs['name']
                extension.version = extension_gs['version']
                extension.state = extension_gs['state']
                # The 'encodedSignature' property is optional. If absent, it may mean the VmSettings API does not support
                # it, or the extension is not signed. In either case, we set extension.encoded_signature to an empty string.
                encoded_signature = extension_gs.get('encodedSignature')
                extension.encoded_signature = "" if encoded_signature is None else encoded_signature
                if extension.state not in ExtensionRequestedState.All:
                    raise Exception('Invalid extension state: {0} ({1})'.format(extension.state, extension.name))
                is_multi_config = extension_gs.get('isMultiConfig')
                if is_multi_config is not None:
                    extension.supports_multi_config = is_multi_config
                location = extension_gs.get('location')
                if location is not None:
                    extension.manifest_uris.append(location)
                fail_over_location = extension_gs.get('failoverLocation')
                if fail_over_location is not None:
                    extension.manifest_uris.append(fail_over_location)
                additional_locations = extension_gs.get('additionalLocations')
                if additional_locations is not None:
                    if not isinstance(additional_locations, list):
                        raise Exception('additionalLocations should be an array (got {0})'.format(additional_locations))
                    extension.manifest_uris.extend(additional_locations)

                #
                # Settings
                #
                settings_list = extension_gs.get('settings')
                if settings_list is not None:
                    if not isinstance(settings_list, list):
                        raise Exception("'settings' should be an array (extension: {0})".format(extension.name))
                    if not extension.supports_multi_config and len(settings_list) > 1:
                        raise Exception("Single-config extension includes multiple settings (extension: {0})".format(extension.name))

                    for s in settings_list:
                        settings = ExtensionSettings()
                        public_settings = s.get('publicSettings')
                        # Note that publicSettings, protectedSettings and protectedSettingsCertThumbprint can be None; do not change this to, for example,
                        # empty, since those values are serialized to the extension's status file and extensions may depend on the current implementation
                        # (for example, no public settings would currently be serialized as '"publicSettings": null')
                        settings.publicSettings = None if public_settings is None else json.loads(public_settings)
                        settings.protectedSettings = s.get('protectedSettings')
                        thumbprint = s.get('protectedSettingsCertThumbprint')
                        if thumbprint is None and settings.protectedSettings is not None:
                            raise Exception("The certificate thumbprint for protected settings is missing (extension: {0})".format(extension.name))
                        settings.certificateThumbprint = thumbprint
                        # in multi-config each settings have their own name, sequence number and state
                        if extension.supports_multi_config:
                            settings.name = s['extensionName']
                            settings.sequenceNumber = s['seqNo']
                            settings.state = s['extensionState']
                        else:
                            settings.name = extension.name
                            settings.sequenceNumber = extension_gs['settingsSeqNo']
                            settings.state = extension.state
                        extension.settings.append(settings)

                #
                # Dependency level
                #
                depends_on = extension_gs.get("dependsOn")
                if depends_on is not None:
                    self._parse_dependency_level(depends_on, extension)

                self._extensions.append(extension)

    @staticmethod
    def _parse_dependency_level(depends_on, extension):
        # Sample (NOTE: The first sample is single-config, the second multi-config):
        # {
        #     ...
        #     "extensionGoalStates": [
        #         {
        #             "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent",
        #             ...
        #             "settings": [
        #                 ...
        #             ],
        #             "dependsOn": [
        #                 {
        #                     "DependsOnExtension": [
        #                         {
        #                             "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent"
        #                         }
        #                     ],
        #                     "dependencyLevel": 1
        #                 }
        #             ]
        #         },
        #         {
        #             "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux",
        #             ...
        #             "isMultiConfig": true,
        #             "settings": [
        #                 {
        #                     ...
        #                     "extensionName": "MCExt1",
        #                 },
        #                 {
        #                     ...
        #                     "extensionName": "MCExt2",
        #                 },
        #                 {
        #                     ...
        #                     "extensionName": "MCExt3",
        #                 }
        #             ],
        #             "dependsOn": [
        #                 {
        #                     "dependsOnExtension": [
        #                         {
        #                             "extension": "...",
        #                             "handler": "..."
        #                         },
        #                         {
        #                             "extension": "...",
        #                             "handler": "..."
        #                         }
        #                     ],
        #                     "dependencyLevel": 2,
        #                     "name": "MCExt1"
        #                 },
        #                 {
        #                     "dependsOnExtension": [
        #                         {
        #                             "extension": "...",
        #                             "handler": "..."
        #                         }
        #                     ],
        #                     "dependencyLevel": 1,
        #                     "name": "MCExt2"
        #                 }
        #                 ...
        #             ]
        #         ...
        # }
        if not isinstance(depends_on, list):
            raise Exception('dependsOn should be an array ({0}) (got {1})'.format(extension.name, depends_on))

        if not extension.supports_multi_config:
            # single-config
            length = len(depends_on)
            if length > 1:
                raise Exception('dependsOn should be an array with exactly one item for single-config extensions ({0}) (got {1})'.format(extension.name, depends_on))
            if length == 0:
                logger.warn('dependsOn is an empty array for extension {0}; setting the dependency level to 0'.format(extension.name))
                dependency_level = 0
            else:
                dependency_level = depends_on[0]['dependencyLevel']
                depends_on_extension = depends_on[0].get('dependsOnExtension')
                if depends_on_extension is None:
                    # TODO: Consider removing this check and its telemetry after a few releases if we do not receive any telemetry indicating
                    #       that dependsOnExtension is actually missing from the vmSettings
                    message = 'Missing dependsOnExtension on extension {0}'.format(extension.name)
                    logger.warn(message)
                    add_event(WALAEventOperation.ProvisionAfterExtensions, message=message, is_success=False, log_event=False)
                else:
                    message = '{0} depends on {1}'.format(extension.name, depends_on_extension)
                    logger.info(message)
                    add_event(WALAEventOperation.ProvisionAfterExtensions, message=message, is_success=True, log_event=False)
            if len(extension.settings) == 0:
                message = 'Extension {0} does not have any settings. Will ignore dependency (dependency level: {1})'.format(extension.name, dependency_level)
                logger.warn(message)
                add_event(WALAEventOperation.ProvisionAfterExtensions, message=message, is_success=False, log_event=False)
            else:
                extension.settings[0].dependencyLevel = dependency_level
        else:
            # multi-config
            settings_by_name = {}
            for settings in extension.settings:
                settings_by_name[settings.name] = settings
            for dependency in depends_on:
                settings = settings_by_name.get(dependency["name"])
                if settings is None:
                    raise Exception("Dependency '{0}' does not correspond to any of the settings in the extension (settings: {1})".format(dependency["name"], settings_by_name.keys()))
                settings.dependencyLevel = dependency["dependencyLevel"]

    def supports_encoded_signature(self):
        """
        Returns True if the HGAP version supports the 'encoded_signature' extension property.
        """
        return self._host_ga_plugin_version >= _MIN_HGAP_VERSION_FOR_EXT_SIGNATURE


#
# TODO: The current implementation of the vmSettings API uses inconsistent casing on the names of the json items it returns.
#       To work around that, we use _CaseFoldedDict to query those json items in a case-insensitive manner. Do not use
#       _CaseFoldedDict for other purposes. Remove it once the vmSettings API is updated.
#
class _CaseFoldedDict(dict):
    @staticmethod
    def from_dict(dictionary):
        case_folded = _CaseFoldedDict()
        for key, value in dictionary.items():
            case_folded[key] = _CaseFoldedDict._to_case_folded_dict_item(value)
        return case_folded

    def get(self, key):
        return super(_CaseFoldedDict, self).get(_casefold(key))

    def has_key(self, key):
        return super(_CaseFoldedDict, self).get(_casefold(key))

    def __getitem__(self, key):
        return super(_CaseFoldedDict, self).__getitem__(_casefold(key))

    def __setitem__(self, key, value):
        return super(_CaseFoldedDict, self).__setitem__(_casefold(key), value)

    def __contains__(self, key):
        return super(_CaseFoldedDict, self).__contains__(_casefold(key))

    @staticmethod
    def _to_case_folded_dict_item(item):
        if isinstance(item, dict):
            case_folded_dict = _CaseFoldedDict()
            for key, value in item.items():
                case_folded_dict[_casefold(key)] = _CaseFoldedDict._to_case_folded_dict_item(value)
            return case_folded_dict
        if isinstance(item, list):
            return [_CaseFoldedDict._to_case_folded_dict_item(list_item) for list_item in item]
        return item

    def copy(self):
        raise NotImplementedError()

    @staticmethod
    def fromkeys(*args, **kwargs):
        raise NotImplementedError()

    def pop(self, key, default=None):
        raise NotImplementedError()

    def setdefault(self, key, default=None):
        raise NotImplementedError()

    def update(self, E=None, **F):  # known special case of dict.update
        raise NotImplementedError()

    def __delitem__(self, *args, **kwargs):
        raise NotImplementedError()


# casefold() does not exist on Python 2 so we use lower() there
def _casefold(string):
    if sys.version_info[0] == 2:
        return type(string).lower(string)  # the type of "string" can be unicode or str
    # Class 'str' has no 'casefold' member (no-member) -- Disabled: This warning shows up on Python 2.7 pylint runs
    # but this code is actually not executed on Python 2.
return str.casefold(string) # pylint: disable=no-member Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/goal_state.py000066400000000000000000001072401510742556200264500ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import datetime import os import re import time import json from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.event import add_event, WALAEventOperation, LogEvent from azurelinuxagent.common.exception import ProtocolError, ResourceGoneError from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.common.protocol.extensions_goal_state_factory import ExtensionsGoalStateFactory from azurelinuxagent.common.protocol.extensions_goal_state import VmSettingsParseError, GoalStateSource from azurelinuxagent.common.protocol.hostplugin import VmSettingsNotSupported, VmSettingsSupportStopped from azurelinuxagent.common.protocol.restapi import RemoteAccessUser, RemoteAccessUsersList, ExtHandlerPackage, ExtHandlerPackageList from azurelinuxagent.common.utils import fileutil, shellutil from azurelinuxagent.common.utils.archive import GoalStateHistory, SHARED_CONF_FILE_NAME from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.utils.textutil import parse_doc, 
findall, find, findtext, getattrib, gettext GOAL_STATE_URI = "http://{0}/machine/?comp=goalstate" CERTS_FILE_NAME = "Certificates.xml" P7M_FILE_NAME = "Certificates.p7m" PFX_FILE_NAME = "Certificates.pfx" PEM_FILE_NAME = "Certificates.pem" TRANSPORT_CERT_FILE_NAME = "TransportCert.pem" TRANSPORT_PRV_FILE_NAME = "TransportPrivate.pem" _GET_GOAL_STATE_MAX_ATTEMPTS = 6 class GoalStateProperties(object): """ Enum for defining the properties that we fetch in the goal state """ RoleConfig = 0x1 HostingEnv = 0x2 SharedConfig = 0x4 ExtensionsGoalState = 0x8 Certificates = 0x10 RemoteAccessInfo = 0x20 All = RoleConfig | HostingEnv | SharedConfig | ExtensionsGoalState | Certificates | RemoteAccessInfo class GoalState(object): def __init__(self, wire_client, goal_state_properties=GoalStateProperties.All, silent=False, save_to_history=False): """ Fetches the goal state using the given wire client. Fetching the goal state involves several HTTP requests to the WireServer and the HostGAPlugin. There is an initial request to WireServer's goalstate API, which response includes the incarnation, role instance, container ID, role config, and URIs to the rest of the goal state (ExtensionsConfig, Certificates, Remote Access users, etc.). Additional requests are done using those URIs (all of them point to APIs in the WireServer). Additionally, there is a request to the HostGAPlugin for the vmSettings, which determines the goal state for extensions when using the Fast Track pipeline. To reduce the number of requests, when possible, create a single instance of GoalState and use the update() method to keep it up to date. 
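        The intended usage pattern described above can be sketched as follows (illustrative
        pseudocode only; the polling loop, "wire_client", and "process" are assumptions for
        the sketch and not part of this module):

        ```
        goal_state = GoalState(wire_client, save_to_history=True)   # create once
        while agent_is_running:
            goal_state.update()   # re-fetches only when the incarnation or vmSettings etag changed
            process(goal_state.extensions_goal_state)
        ```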
""" try: self._wire_client = wire_client self._history = None self._save_to_history = save_to_history self._extensions_goal_state = None # populated from vmSettings or extensionsConfig self._goal_state_properties = goal_state_properties self.logger = logger.Logger(logger.DEFAULT_LOGGER) self.logger.silent = silent # These properties hold the goal state from the WireServer and are initialized by self._fetch_full_wire_server_goal_state() self._incarnation = None self._role_instance_id = None self._role_config_name = None self._container_id = None self._hosting_env = None self._shared_conf = None self._certs = EmptyCertificates() self._certs_uri = None self._remote_access = None self.update(silent=silent) except ProtocolError: raise except Exception as exception: # We don't log the error here since fetching the goal state is done every few seconds raise ProtocolError(msg="Error fetching goal state", inner=exception) @property def incarnation(self): return self._incarnation @property def container_id(self): if not self._goal_state_properties & GoalStateProperties.RoleConfig: raise ProtocolError("ContainerId is not in goal state properties") else: return self._container_id @property def role_instance_id(self): if not self._goal_state_properties & GoalStateProperties.RoleConfig: raise ProtocolError("RoleInstanceId is not in goal state properties") else: return self._role_instance_id @property def role_config_name(self): if not self._goal_state_properties & GoalStateProperties.RoleConfig: raise ProtocolError("RoleConfig is not in goal state properties") else: return self._role_config_name @property def extensions_goal_state(self): if not self._goal_state_properties & GoalStateProperties.ExtensionsGoalState: raise ProtocolError("ExtensionsGoalState is not in goal state properties") else: return self._extensions_goal_state @property def certs(self): if not self._goal_state_properties & GoalStateProperties.Certificates: raise ProtocolError("Certificates is not in goal state 
properties") else: return self._certs @property def hosting_env(self): if not self._goal_state_properties & GoalStateProperties.HostingEnv: raise ProtocolError("HostingEnvironment is not in goal state properties") else: return self._hosting_env @property def shared_conf(self): if not self._goal_state_properties & GoalStateProperties.SharedConfig: raise ProtocolError("SharedConfig is not in goal state properties") else: return self._shared_conf @property def remote_access(self): if not self._goal_state_properties & GoalStateProperties.RemoteAccessInfo: raise ProtocolError("RemoteAccessInfo is not in goal state properties") else: return self._remote_access @property def history(self): return self._history def fetch_agent_manifest(self, family_name, uris): """ This is a convenience method that wraps WireClient.fetch_manifest(), but adds the required 'use_verify_header' parameter and saves the manifest to the history folder. """ return self._fetch_manifest("agent", "waagent.{0}".format(family_name), uris) def fetch_extension_manifest(self, extension_name, uris): """ This is a convenience method that wraps WireClient.fetch_manifest(), but adds the required 'use_verify_header' parameter and saves the manifest to the history folder. """ return self._fetch_manifest("extension", extension_name, uris) def _fetch_manifest(self, manifest_type, name, uris): try: is_fast_track = self.extensions_goal_state.source == GoalStateSource.FastTrack xml_text = self._wire_client.fetch_manifest(manifest_type, uris, use_verify_header=is_fast_track) if self._save_to_history: self._history.save_manifest(name, xml_text) return ExtensionManifest(xml_text) except Exception as e: raise ProtocolError("Failed to retrieve {0} manifest. 
Error: {1}".format(manifest_type, ustr(e)))

    @staticmethod
    def update_host_plugin_headers(wire_client):
        """
        Updates the container ID and role config name that are sent in the headers of HTTP requests to the HostGAPlugin
        """
        # Fetching the goal state updates the HostGAPlugin so simply trigger the request
        GoalState._fetch_goal_state(wire_client)

    def update(self, force_update=False, silent=False):
        """
        Updates the current GoalState instance fetching values from the WireServer/HostGAPlugin as needed
        """
        self.logger.silent = silent

        #
        # Fetch the goal state from both the HGAP and the WireServer
        #
        timestamp = datetime.datetime.now(UTC)

        if force_update:
            message = "Refreshing goal state and vmSettings"
            self.logger.info(message)
            add_event(op=WALAEventOperation.GoalState, message=message)

        incarnation, xml_text, xml_doc = GoalState._fetch_goal_state(self._wire_client)
        goal_state_updated = force_update or incarnation != self._incarnation
        if goal_state_updated:
            message = 'Fetched a new incarnation for the WireServer goal state [incarnation {0}]'.format(incarnation)
            self.logger.info(message)
            add_event(op=WALAEventOperation.GoalState, message=message)

        vm_settings, vm_settings_updated = None, False
        if self._goal_state_properties & GoalStateProperties.ExtensionsGoalState:
            try:
                vm_settings, vm_settings_updated = GoalState._fetch_vm_settings(self._wire_client, force_update=force_update)
            except VmSettingsSupportStopped as exception:
                # If the HGAP stopped supporting vmSettings, we need to use the goal state from the WireServer
                self._restore_wire_server_goal_state(incarnation, xml_text, xml_doc, exception)
                return

        if vm_settings_updated:
            self.logger.info('')
            message = "Fetched new vmSettings [HostGAPlugin correlation ID: {0} eTag: {1} source: {2}]".format(vm_settings.hostga_plugin_correlation_id, vm_settings.etag, vm_settings.source)
            self.logger.info(message)
            add_event(op=WALAEventOperation.GoalState, message=message)

        # Ignore the vmSettings if their source is Fabric (processing a Fabric
        # goal state may require the tenant certificate and the vmSettings don't include it.)
        if vm_settings is not None and vm_settings.source == GoalStateSource.Fabric:
            if vm_settings_updated:
                message = "The vmSettings originated via Fabric; will ignore them."
                self.logger.info(message)
                add_event(op=WALAEventOperation.GoalState, message=message)
            vm_settings, vm_settings_updated = None, False

        # If neither goal state has changed we are done with the update
        if not goal_state_updated and not vm_settings_updated:
            return

        # Start a new history subdirectory and capture the updated goal state
        tag = "{0}".format(incarnation) if vm_settings is None else "{0}-{1}".format(incarnation, vm_settings.etag)
        if self._save_to_history:
            self._history = GoalStateHistory(timestamp, tag)
            if goal_state_updated:
                self._history.save_goal_state(xml_text)
            if vm_settings_updated:
                self._history.save_vm_settings(vm_settings.get_redacted_text())

        #
        # Continue fetching the rest of the goal state
        #
        extensions_config = None
        if goal_state_updated:
            extensions_config = self._fetch_full_wire_server_goal_state(incarnation, xml_doc)

        #
        # Lastly, decide whether to use the vmSettings or extensionsConfig for the extensions goal state
        #
        if goal_state_updated:
            # On rotation of the tenant certificate the vmSettings and extensionsConfig are not updated. However, the
            # incarnation of the WS goal state is updated, so 'goal_state_updated' will be True.
            # In this case, we should use the most recent of vmSettings and extensionsConfig.
if vm_settings is not None: most_recent = vm_settings if vm_settings.created_on_timestamp > extensions_config.created_on_timestamp else extensions_config else: most_recent = extensions_config else: # vm_settings_updated most_recent = vm_settings if self._extensions_goal_state is None or most_recent.created_on_timestamp >= self._extensions_goal_state.created_on_timestamp: self._extensions_goal_state = most_recent # For each extension in the goal state being executed, we emit telemetry to indicate whether a signature is present # for the extension. The "is_success" field reflects whether the extension was signed. # If signature is missing, skip telemetry in the following cases: # - Extension requested state is 'uninstall' (uninstall goal states never include signature). # - The goal state API does not support the 'encoded_signature' property (e.g., fast track goal states where HGAP version does not support signature). for ext in self._extensions_goal_state.extensions: if ext.state == "uninstall" or not self._extensions_goal_state.supports_encoded_signature(): continue add_event(op=WALAEventOperation.ExtensionSigned, message="", name=ext.name, version=ext.version, is_success=ext.encoded_signature != "", log_event=False) # Ensure all certificates are downloaded on Fast Track goal states in order to maintain backwards compatibility with previous # versions of the Agent, which used to download certificates from the WireServer on every goal state. Some customer applications # depend on this behavior (see https://github.com/Azure/WALinuxAgent/issues/2750). # if self._extensions_goal_state.source == GoalStateSource.FastTrack and self._goal_state_properties & GoalStateProperties.Certificates: self._check_and_download_missing_certs_on_disk() def _download_certificates(self, certs_uri): certs = Certificates(self._wire_client, certs_uri, self.logger) # Save the certificates summary (i.e. 
        # the thumbprints but not the certificates themselves) to the goal state history
        if self._save_to_history:
            self._history.save_certificates(json.dumps(certs.summary))
        return certs

    def _check_and_download_missing_certs_on_disk(self):
        # Re-download certificates if any have been removed from disk since last download
        if self._certs_uri is not None:
            certificates = self.certs.summary
            certs_missing_from_disk = False

            for c in certificates:
                cert_path = os.path.join(conf.get_lib_dir(), c['thumbprint'] + '.crt')
                if not os.path.isfile(cert_path):
                    certs_missing_from_disk = True
                    message = "Certificate required by goal state is not on disk: {0}".format(cert_path)
                    self.logger.info(message)
                    add_event(op=WALAEventOperation.GoalState, message=message)

            if certs_missing_from_disk:
                # Try to re-download certs. Sometimes download may fail if certs_uri is outdated/contains wrong
                # container id (for example, when the VM is moved to a new container after resuming from
                # hibernation). If download fails we should report and continue with goal state processing, as some
                # extensions in the goal state may succeed.
                try:
                    self._download_certificates(self._certs_uri)
                except Exception as e:
                    message = "Unable to download certificates. Goal state processing will continue, some " \
                              "extensions requiring certificates may fail. Error: {0}".format(ustr(e))
                    self.logger.warn(message)
                    add_event(op=WALAEventOperation.GoalState, is_success=False, message=message)

    def _restore_wire_server_goal_state(self, incarnation, xml_text, xml_doc, vm_settings_support_stopped_error):
        msg = 'The HGAP stopped supporting vmSettings; will fetch the goal state from the WireServer.'
self.logger.info(msg) add_event(op=WALAEventOperation.VmSettings, message=msg) if self._save_to_history: self._history = GoalStateHistory(datetime.datetime.now(UTC), incarnation) self._history.save_goal_state(xml_text) self._extensions_goal_state = self._fetch_full_wire_server_goal_state(incarnation, xml_doc) if self._extensions_goal_state.created_on_timestamp < vm_settings_support_stopped_error.timestamp: self._extensions_goal_state.is_outdated = True msg = "Fetched a Fabric goal state older than the most recent FastTrack goal state; will skip it.\nFabric: {0}\nFastTrack: {1}".format( self._extensions_goal_state.created_on_timestamp, vm_settings_support_stopped_error.timestamp) self.logger.info(msg) add_event(op=WALAEventOperation.VmSettings, message=msg) def save_to_history(self, data, file_name): if self._save_to_history: self._history.save(data, file_name) @staticmethod def _fetch_goal_state(wire_client): """ Issues an HTTP request for the goal state (WireServer) and returns a tuple containing the response as text and as an XML Document """ uri = GOAL_STATE_URI.format(wire_client.get_endpoint()) # In some environments a few goal state requests return a missing RoleInstance; these retries are used to work around that issue # TODO: Consider retrying on 410 (ResourceGone) as well incarnation = "unknown" for _ in range(0, _GET_GOAL_STATE_MAX_ATTEMPTS): xml_text = wire_client.fetch_config(uri, wire_client.get_header()) xml_doc = parse_doc(xml_text) incarnation = findtext(xml_doc, "Incarnation") role_instance = find(xml_doc, "RoleInstance") if role_instance: break time.sleep(0.5) else: raise ProtocolError("Fetched goal state without a RoleInstance [incarnation {inc}]".format(inc=incarnation)) # Telemetry and the HostGAPlugin depend on the container id/role config; keep them up-to-date each time we fetch the goal state # (note that these elements can change even if the incarnation of the goal state does not change) container = find(xml_doc, "Container") container_id = 
findtext(container, "ContainerId") role_config = find(role_instance, "Configuration") role_config_name = findtext(role_config, "ConfigName") AgentGlobals.update_container_id(container_id) # Telemetry uses this global to pick up the container id wire_client.update_host_plugin(container_id, role_config_name) return incarnation, xml_text, xml_doc @staticmethod def _fetch_vm_settings(wire_client, force_update=False): """ Issues an HTTP request (HostGAPlugin) for the vm settings and returns the response as an ExtensionsGoalState. """ vm_settings, vm_settings_updated = (None, False) if conf.get_enable_fast_track(): try: try: vm_settings, vm_settings_updated = wire_client.get_host_plugin().fetch_vm_settings(force_update=force_update) except ResourceGoneError: # retry after refreshing the HostGAPlugin GoalState.update_host_plugin_headers(wire_client) vm_settings, vm_settings_updated = wire_client.get_host_plugin().fetch_vm_settings(force_update=force_update) except VmSettingsSupportStopped: raise except VmSettingsNotSupported: pass except VmSettingsParseError as exception: # ensure we save the vmSettings if there were parsing errors, but save them only once per ETag if not GoalStateHistory.tag_exists(exception.etag): GoalStateHistory(datetime.datetime.now(UTC), exception.etag).save_vm_settings(exception.vm_settings_text) raise return vm_settings, vm_settings_updated def _fetch_full_wire_server_goal_state(self, incarnation, xml_doc): """ Issues HTTP requests (to the WireServer) for each of the URIs in the goal state (ExtensionsConfig, Certificate, Remote Access users, etc) and populates the corresponding properties. Returns the value of ExtensionsConfig. 
""" try: self.logger.info('') message = 'Fetching full goal state from the WireServer [incarnation {0}]'.format(incarnation) self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) role_instance_id = None role_config_name = None container_id = None if GoalStateProperties.RoleConfig & self._goal_state_properties: role_instance = find(xml_doc, "RoleInstance") role_instance_id = findtext(role_instance, "InstanceId") role_config = find(role_instance, "Configuration") role_config_name = findtext(role_config, "ConfigName") container = find(xml_doc, "Container") container_id = findtext(container, "ContainerId") extensions_config_uri = findtext(xml_doc, "ExtensionsConfig") if not (GoalStateProperties.ExtensionsGoalState & self._goal_state_properties) or extensions_config_uri is None: extensions_config = ExtensionsGoalStateFactory.create_empty(incarnation) else: xml_text = self._wire_client.fetch_config(extensions_config_uri, self._wire_client.get_header()) extensions_config = ExtensionsGoalStateFactory.create_from_extensions_config(incarnation, xml_text, self._wire_client) if self._save_to_history: self._history.save_extensions_config(extensions_config.get_redacted_text()) hosting_env = None if GoalStateProperties.HostingEnv & self._goal_state_properties: hosting_env_uri = findtext(xml_doc, "HostingEnvironmentConfig") xml_text = self._wire_client.fetch_config(hosting_env_uri, self._wire_client.get_header()) hosting_env = HostingEnv(xml_text) if self._save_to_history: self._history.save_hosting_env(xml_text) shared_config = None if GoalStateProperties.SharedConfig & self._goal_state_properties: shared_conf_uri = findtext(xml_doc, "SharedConfig") xml_text = self._wire_client.fetch_config(shared_conf_uri, self._wire_client.get_header()) shared_config = SharedConfig(xml_text) if self._save_to_history: self._history.save_shared_conf(xml_text) # SharedConfig.xml is used by other components (Azsec and Singularity/HPC Infiniband), so save it to the 
agent's root directory as well shared_config_file = os.path.join(conf.get_lib_dir(), SHARED_CONF_FILE_NAME) try: fileutil.write_file(shared_config_file, xml_text) except Exception as e: logger.warn("Failed to save {0}: {1}".format(shared_config, e)) certs = EmptyCertificates() certs_uri = findtext(xml_doc, "Certificates") if (GoalStateProperties.Certificates & self._goal_state_properties) and certs_uri is not None: certs = self._download_certificates(certs_uri) remote_access = None if GoalStateProperties.RemoteAccessInfo & self._goal_state_properties: remote_access_uri = findtext(container, "RemoteAccessInfo") if remote_access_uri is not None: xml_text = self._wire_client.fetch_config(remote_access_uri, self._wire_client.get_header_for_remote_access()) remote_access = RemoteAccess(xml_text) if self._save_to_history: self._history.save_remote_access(xml_text) self._incarnation = incarnation self._role_instance_id = role_instance_id self._role_config_name = role_config_name self._container_id = container_id self._hosting_env = hosting_env self._shared_conf = shared_config self._certs = certs self._certs_uri = certs_uri self._remote_access = remote_access return extensions_config except Exception as exception: self.logger.warn("Fetching the goal state failed: {0}", ustr(exception)) raise ProtocolError(msg="Error fetching goal state", inner=exception) finally: message = 'Fetch goal state from WireServer completed' self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) class HostingEnv(object): def __init__(self, xml_text): self.xml_text = xml_text xml_doc = parse_doc(xml_text) incarnation = find(xml_doc, "Incarnation") self.vm_name = getattrib(incarnation, "instance") role = find(xml_doc, "Role") self.role_name = getattrib(role, "name") deployment = find(xml_doc, "Deployment") self.deployment_name = getattrib(deployment, "name") class SharedConfig(object): def __init__(self, xml_text): self.xml_text = xml_text class 
Certificates(LogEvent): def __init__(self, wire_client, uri, logger_): super(Certificates, self).__init__(logger_) self.summary = [] self._crypt_util = CryptUtil(conf.get_openssl_cmd()) try: pfx_file = self._download_certificates_pfx(wire_client, uri) if pfx_file is None: # The response from the WireServer may not have any certificates return try: pem_file = self._convert_certificates_pfx_to_pem(pfx_file) finally: self._remove_file(pfx_file) self.summary = self._extract_certificate(pem_file) for c in self.summary: self.info(WALAEventOperation.GoalStateCertificates, "Downloaded certificate {0}", c) except Exception as e: self.error(WALAEventOperation.GoalStateCertificates, "Error fetching the goal state certificates: {0}", ustr(e)) def _remove_file(self, file): if os.path.exists(file): try: os.remove(file) except Exception as e: self.warn(WALAEventOperation.GoalStateCertificates, "Failed to remove {0}: {1}", file, ustr(e)) def _download_certificates_pfx(self, wire_client, uri): """ Downloads the certificates from the WireServer and saves them to a pfx file. 
Returns the full path of the pfx file, or None, if the WireServer response does not have a "Data" element """ trans_prv_file = os.path.join(conf.get_lib_dir(), TRANSPORT_PRV_FILE_NAME) trans_cert_file = os.path.join(conf.get_lib_dir(), TRANSPORT_CERT_FILE_NAME) xml_file = os.path.join(conf.get_lib_dir(), CERTS_FILE_NAME) pfx_file = os.path.join(conf.get_lib_dir(), PFX_FILE_NAME) for cypher in ["AES128_CBC", "DES_EDE3_CBC"]: headers = wire_client.get_headers_for_encrypted_request(cypher) try: xml_text = wire_client.fetch_config(uri, headers) except Exception as e: self.warn(WALAEventOperation.GoalStateCertificates, "Error in Certificates request [cypher: {0}]: {1}", cypher, ustr(e)) continue fileutil.write_file(xml_file, xml_text) xml_doc = parse_doc(xml_text) data = findtext(xml_doc, "Data") if data is None: self.info(WALAEventOperation.GoalStateCertificates, "The Data element of the Certificates response is empty") return None certificate_format = findtext(xml_doc, "Format") if certificate_format and certificate_format != "Pkcs7BlobWithPfxContents": self.warn(WALAEventOperation.GoalStateCertificates, "The Certificates format is not Pkcs7BlobWithPfxContents; skipping. 
Format is {0}", certificate_format)
                return None

            p7m_file = Certificates._create_p7m_file(data)

            try:
                self._crypt_util.decrypt_certificates_p7m(p7m_file, trans_prv_file, trans_cert_file, pfx_file)
            except shellutil.CommandError as e:
                self.warn(WALAEventOperation.GoalState, "Error in transport decryption [cypher: {0}]: {1}", cypher, ustr(e))
                self._remove_file(pfx_file)
                continue

            return pfx_file

        raise Exception("Cannot download certificates using any of the supported cyphers")

    @staticmethod
    def _create_p7m_file(data):
        p7m_file = os.path.join(conf.get_lib_dir(), P7M_FILE_NAME)
        p7m = ("MIME-Version:1.0\n"  # pylint: disable=W1308
               "Content-Disposition: attachment; filename=\"{0}\"\n"
               "Content-Type: application/x-pkcs7-mime; name=\"{1}\"\n"
               "Content-Transfer-Encoding: base64\n"
               "\n"
               "{2}").format(p7m_file, p7m_file, data)
        fileutil.write_file(p7m_file, p7m)
        return p7m_file

    def _convert_certificates_pfx_to_pem(self, pfx_file):
        """
        Convert the pfx file to pem file.
        """
        pem_file = os.path.join(conf.get_lib_dir(), PEM_FILE_NAME)

        for nomacver in [True, False]:
            try:
                self._crypt_util.convert_pfx_to_pem(pfx_file, nomacver, pem_file)
                return pem_file
            except shellutil.CommandError as e:
                self._remove_file(pem_file)  # An error may leave an empty pem file, which can produce a failure on some versions of OpenSSL (e.g. 3.2.2) on the next invocation
                self.warn(WALAEventOperation.GoalState, "Error converting PFX to PEM [-nomacver: {0}]: {1}", nomacver, ustr(e))
                continue

        raise Exception("Cannot convert PFX to PEM")

    def _extract_certificate(self, pem_file):
        """
        Parse the certificates and private keys from the pem file and store them in the certificates directory.
        """
        # The parsing process uses the public key to match prv and crt.
private_keys = {} # map of private keys indexed by public key thumbprints = {} # map of thumbprints indexed by public key buffer = [] # buffer for reading lines belonging to a certificate or private key index = 0 with open(pem_file) as pem: for line in pem.readlines(): buffer.append(line) if re.match(r'[-]+END.*KEY[-]+', line): tmp_file = Certificates._write_to_tmp_file(index, 'prv', buffer) pub = self._crypt_util.get_pubkey_from_prv(tmp_file) private_keys[pub] = tmp_file buffer = [] index += 1 elif re.match(r'[-]+END.*CERTIFICATE[-]+', line): tmp_file = Certificates._write_to_tmp_file(index, 'crt', buffer) pub = self._crypt_util.get_pubkey_from_crt(tmp_file) thumbprint = self._crypt_util.get_thumbprint_from_crt(tmp_file) thumbprints[pub] = thumbprint # Rename crt with thumbprint as the file name crt = "{0}.crt".format(thumbprint) os.rename(tmp_file, os.path.join(conf.get_lib_dir(), crt)) buffer = [] index += 1 # Rename prv key with thumbprint as the file name for pubkey in private_keys: thumbprint = thumbprints[pubkey] if thumbprint: tmp_file = private_keys[pubkey] prv = "{0}.prv".format(thumbprint) os.rename(tmp_file, os.path.join(conf.get_lib_dir(), prv)) else: # Since private key has *no* matching certificate, it will not be named correctly self.warn(WALAEventOperation.GoalState, "Found a private key with no matching cert/thumbprint!") certificates = [] for pubkey, thumbprint in thumbprints.items(): has_private_key = pubkey in private_keys certificates.append({"thumbprint": thumbprint, "hasPrivateKey": has_private_key}) return certificates @staticmethod def _write_to_tmp_file(index, suffix, buf): file_name = os.path.join(conf.get_lib_dir(), "{0}.{1}".format(index, suffix)) fileutil.write_file(file_name, "".join(buf)) return file_name class EmptyCertificates: def __init__(self): self.summary = [] class RemoteAccess(object): """ Object containing information about user accounts """ # # # # # # # # # # # # # def __init__(self, xml_text): self.xml_text = xml_text 
self.version = None self.incarnation = None self.user_list = RemoteAccessUsersList() if self.xml_text is None or len(self.xml_text) == 0: return xml_doc = parse_doc(self.xml_text) self.version = findtext(xml_doc, "Version") self.incarnation = findtext(xml_doc, "Incarnation") user_collection = find(xml_doc, "Users") users = findall(user_collection, "User") for user in users: remote_access_user = RemoteAccess._parse_user(user) self.user_list.users.append(remote_access_user) @staticmethod def _parse_user(user): name = findtext(user, "Name") encrypted_password = findtext(user, "Password") expiration = findtext(user, "Expiration") remote_access_user = RemoteAccessUser(name, encrypted_password, expiration) return remote_access_user class ExtensionManifest(object): def __init__(self, xml_text): if xml_text is None: raise ValueError("ExtensionManifest is None") logger.verbose("Load ExtensionManifest.xml") self.pkg_list = ExtHandlerPackageList() self._parse(xml_text) def _parse(self, xml_text): xml_doc = parse_doc(xml_text) self._handle_packages(findall(find(xml_doc, "Plugins"), "Plugin"), False) self._handle_packages(findall(find(xml_doc, "InternalPlugins"), "Plugin"), True) def _handle_packages(self, packages, isinternal): for package in packages: version = findtext(package, "Version") disallow_major_upgrade = findtext(package, "DisallowMajorVersionUpgrade") if disallow_major_upgrade is None: disallow_major_upgrade = '' disallow_major_upgrade = disallow_major_upgrade.lower() == "true" uris = find(package, "Uris") uri_list = findall(uris, "Uri") uri_list = [gettext(x) for x in uri_list] pkg = ExtHandlerPackage() pkg.version = version pkg.disallow_major_upgrade = disallow_major_upgrade for uri in uri_list: pkg.uris.append(uri) pkg.isinternal = isinternal self.pkg_list.versions.append(pkg) Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/healthservice.py000066400000000000000000000152271510742556200271570ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # 
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import json

from azurelinuxagent.common import logger
from azurelinuxagent.common.exception import HttpError
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils import restutil
from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION


class Observation(object):
    def __init__(self, name, is_healthy, description='', value=''):
        if name is None:
            raise ValueError("Observation name must be provided")

        if is_healthy is None:
            raise ValueError("Observation health must be provided")

        if value is None:
            value = ''

        if description is None:
            description = ''

        self.name = name
        self.is_healthy = is_healthy
        self.description = description
        self.value = value

    @property
    def as_obj(self):
        return {
            "ObservationName": self.name[:64],
            "IsHealthy": self.is_healthy,
            "Description": self.description[:128],
            "Value": self.value[:128]
        }


class HealthService(object):

    ENDPOINT = 'http://{0}:80/HealthService'
    API = 'reporttargethealth'
    VERSION = "1.0"
    OBSERVER_NAME = 'WALinuxAgent'
    HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME = 'GuestAgentPluginHeartbeat'
    HOST_PLUGIN_STATUS_OBSERVATION_NAME = 'GuestAgentPluginStatus'
    HOST_PLUGIN_VERSIONS_OBSERVATION_NAME = 'GuestAgentPluginVersions'
    HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME = 'GuestAgentPluginArtifact'
    IMDS_OBSERVATION_NAME = 'InstanceMetadataHeartbeat'
    MAX_OBSERVATIONS = 10

    def __init__(self, endpoint):
        self.endpoint = HealthService.ENDPOINT.format(endpoint)
        self.api = HealthService.API
        self.version = HealthService.VERSION
        self.source = HealthService.OBSERVER_NAME
        self.observations = []

    @property
    def as_json(self):
        data = {
            "Api": self.api,
            "Version": self.version,
            "Source": self.source,
            "Observations": [o.as_obj for o in self.observations]
        }
        return json.dumps(data)

    def report_host_plugin_heartbeat(self, is_healthy):
        """
        Reports a signal for /health
        :param is_healthy: whether the call succeeded
        """
        self._observe(name=HealthService.HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME,
                      is_healthy=is_healthy)
        self._report()

    def report_host_plugin_versions(self, is_healthy, response):
        """
        Reports a signal for /versions
        :param is_healthy: whether the api call succeeded
        :param response: debugging information for failures
        """
        self._observe(name=HealthService.HOST_PLUGIN_VERSIONS_OBSERVATION_NAME,
                      is_healthy=is_healthy,
                      value=response)
        self._report()

    def report_host_plugin_extension_artifact(self, is_healthy, source, response):
        """
        Reports a signal for /extensionArtifact
        :param is_healthy: whether the api call succeeded
        :param source: specifies the api caller for debugging failures
        :param response: debugging information for failures
        """
        self._observe(name=HealthService.HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME,
                      is_healthy=is_healthy,
                      description=source,
                      value=response)
        self._report()

    def report_host_plugin_status(self, is_healthy, response):
        """
        Reports a signal for /status
        :param is_healthy: whether the api call succeeded
        :param response: debugging information for failures
        """
        self._observe(name=HealthService.HOST_PLUGIN_STATUS_OBSERVATION_NAME,
                      is_healthy=is_healthy,
                      value=response)
        self._report()

    def report_imds_status(self, is_healthy, response):
        """
        Reports a signal for /metadata/instance
        :param is_healthy: whether the api call succeeded and returned valid data
        :param response: debugging information for failures
        """
        self._observe(name=HealthService.IMDS_OBSERVATION_NAME,
                      is_healthy=is_healthy,
                      value=response)
        self._report()

    def _observe(self, name, is_healthy, value='', description=''):
        # ensure we keep the list size within bounds
        if len(self.observations) >= HealthService.MAX_OBSERVATIONS:
            del self.observations[:HealthService.MAX_OBSERVATIONS - 1]
        self.observations.append(Observation(name=name,
                                             is_healthy=is_healthy,
                                             value=value,
                                             description=description))

    def _report(self):
        logger.verbose('HealthService: report observations')
        try:
            restutil.http_post(self.endpoint, self.as_json, headers={'Content-Type': 'application/json'})
            logger.verbose('HealthService: Reported observations to {0}: {1}', self.endpoint, self.as_json)
        except HttpError as e:
            logger.warn("HealthService: could not report observations: {0}", ustr(e))
        finally:
            # report any failures via telemetry
            self._report_failures()
            # these signals are not timestamped, so there is no value in persisting data
            del self.observations[:]

    def _report_failures(self):
        try:
            logger.verbose("HealthService: report failures as telemetry")
            from azurelinuxagent.common.event import add_event, WALAEventOperation
            for o in self.observations:
                if not o.is_healthy:
                    add_event(AGENT_NAME,
                              version=CURRENT_VERSION,
                              op=WALAEventOperation.HealthObservation,
                              is_success=False,
                              message=json.dumps(o.as_obj))
        except Exception as e:
            logger.verbose("HealthService: could not report failures: {0}".format(ustr(e)))

Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/hostplugin.py

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import base64
import datetime
import json
import os.path
import threading
import uuid

from azurelinuxagent.common import logger, conf
from azurelinuxagent.common.errorstate import ErrorState, ERROR_STATE_HOST_PLUGIN_FAILURE
from azurelinuxagent.common.event import WALAEventOperation, add_event
from azurelinuxagent.common.exception import HttpError, ProtocolError, ResourceGoneError
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.future import ustr, httpclient, UTC, datetime_min_utc
from azurelinuxagent.common.protocol.healthservice import HealthService
from azurelinuxagent.common.protocol.extensions_goal_state import VmSettingsParseError, GoalStateSource
from azurelinuxagent.common.protocol.extensions_goal_state_factory import ExtensionsGoalStateFactory
from azurelinuxagent.common.utils import restutil, textutil, timeutil
from azurelinuxagent.common.utils.textutil import remove_bom
from azurelinuxagent.common.version import AGENT_NAME, AGENT_VERSION, PY_VERSION_MAJOR

HOST_PLUGIN_PORT = 32526

URI_FORMAT_GET_API_VERSIONS = "http://{0}:{1}/versions"
URI_FORMAT_VM_SETTINGS = "http://{0}:{1}/vmSettings"
URI_FORMAT_GET_EXTENSION_ARTIFACT = "http://{0}:{1}/extensionArtifact"
URI_FORMAT_PUT_VM_STATUS = "http://{0}:{1}/status"
URI_FORMAT_PUT_LOG = "http://{0}:{1}/vmAgentLog"
URI_FORMAT_HEALTH = "http://{0}:{1}/health"

API_VERSION = "2015-09-01"

_HEADER_CLIENT_NAME = "x-ms-client-name"
_HEADER_CLIENT_VERSION = "x-ms-client-version"
_HEADER_CORRELATION_ID = "x-ms-client-correlationid"
_HEADER_CONTAINER_ID = "x-ms-containerid"
_HEADER_DEPLOYMENT_ID = "x-ms-vmagentlog-deploymentid"
_HEADER_VERSION = "x-ms-version"
_HEADER_HOST_CONFIG_NAME = "x-ms-host-config-name"
_HEADER_ARTIFACT_LOCATION = "x-ms-artifact-location"
_HEADER_ARTIFACT_MANIFEST_LOCATION = "x-ms-artifact-manifest-location"
_HEADER_VERIFY_FROM_ARTIFACTS_BLOB = "x-ms-verify-from-artifacts-blob"

MAXIMUM_PAGEBLOB_PAGE_SIZE = 4 * 1024 * 1024  # Max page size: 4MB


class HostPluginProtocol(object):

    is_default_channel = False

    FETCH_REPORTING_PERIOD = datetime.timedelta(minutes=1)
    STATUS_REPORTING_PERIOD = datetime.timedelta(minutes=1)

    def __init__(self, endpoint):
        """
        NOTE: Before using the HostGAPlugin be sure to invoke
        GoalState.update_host_plugin_headers() to initialize the container id
        and role config name
        """
        if endpoint is None:
            raise ProtocolError("HostGAPlugin: Endpoint not provided")
        self.is_initialized = False
        self.is_available = False
        self.api_versions = None
        self.endpoint = endpoint
        self.container_id = None
        self.deployment_id = None
        self.role_config_name = None
        self.manifest_uri = None
        self.health_service = HealthService(endpoint)
        self.fetch_error_state = ErrorState(min_timedelta=ERROR_STATE_HOST_PLUGIN_FAILURE)
        self.status_error_state = ErrorState(min_timedelta=ERROR_STATE_HOST_PLUGIN_FAILURE)
        self.fetch_last_timestamp = None
        self.status_last_timestamp = None
        self._version = FlexibleVersion("0.0.0.0")  # Version 0 means "unknown"
        self._supports_vm_settings = None  # Tri-state variable: None == Not Initialized, True == Supports, False == Does Not Support
        self._supports_vm_settings_next_check = datetime.datetime.now(UTC)
        self._vm_settings_error_reporter = _VmSettingsErrorReporter()
        self._cached_vm_settings = None  # Cached value of the most recent vmSettings

        # restore the state of Fast Track
        self._supports_vm_settings = os.path.exists(self._get_fast_track_state_file())
        self._fast_track_timestamp = HostPluginProtocol.get_fast_track_timestamp()

    @staticmethod
def _extract_deployment_id(role_config_name): # Role config name consists of: .(...) return role_config_name.split(".")[0] if role_config_name is not None else None def check_vm_settings_support(self): """ Returns True if the HostGAPlugin supports the vmSettings API. """ # _host_plugin_supports_vm_settings is set by fetch_vm_settings() if self._supports_vm_settings is None: _, _ = self.fetch_vm_settings() return self._supports_vm_settings def update_container_id(self, new_container_id): self.container_id = new_container_id def update_role_config_name(self, new_role_config_name): self.role_config_name = new_role_config_name self.deployment_id = self._extract_deployment_id(new_role_config_name) def update_manifest_uri(self, new_manifest_uri): self.manifest_uri = new_manifest_uri def ensure_initialized(self): if not self.is_initialized: self.api_versions = self.get_api_versions() self.is_available = API_VERSION in self.api_versions self.is_initialized = self.is_available add_event(op=WALAEventOperation.InitializeHostPlugin, is_success=self.is_available) return self.is_available def get_health(self): """ Call the /health endpoint :return: True if 200 received, False otherwise """ url = URI_FORMAT_HEALTH.format(self.endpoint, HOST_PLUGIN_PORT) logger.verbose("HostGAPlugin: Getting health from [{0}]", url) response = restutil.http_get(url, max_retry=1) return restutil.request_succeeded(response) def get_api_versions(self): url = URI_FORMAT_GET_API_VERSIONS.format(self.endpoint, HOST_PLUGIN_PORT) logger.verbose("HostGAPlugin: Getting API versions at [{0}]" .format(url)) return_val = [] error_response = '' is_healthy = False try: headers = {_HEADER_CONTAINER_ID: self.container_id} response = restutil.http_get(url, headers) if restutil.request_failed(response): error_response = restutil.read_response_error(response) logger.error("HostGAPlugin: Failed Get API versions: {0}".format(error_response)) is_healthy = not restutil.request_failed_at_hostplugin(response) else: 
return_val = ustr(remove_bom(response.read()), encoding='utf-8') is_healthy = True except HttpError as e: logger.error("HostGAPlugin: Exception Get API versions: {0}".format(e)) self.health_service.report_host_plugin_versions(is_healthy=is_healthy, response=error_response) return return_val def get_vm_settings_request(self, correlation_id): url = URI_FORMAT_VM_SETTINGS.format(self.endpoint, HOST_PLUGIN_PORT) headers = { _HEADER_VERSION: API_VERSION, _HEADER_CONTAINER_ID: self.container_id, _HEADER_HOST_CONFIG_NAME: self.role_config_name, _HEADER_CORRELATION_ID: correlation_id } return url, headers def get_artifact_request(self, artifact_url, use_verify_header, artifact_manifest_url=None): if not self.ensure_initialized(): raise ProtocolError("HostGAPlugin: Host plugin channel is not available") if textutil.is_str_none_or_whitespace(artifact_url): raise ProtocolError("HostGAPlugin: No extension artifact url was provided") url = URI_FORMAT_GET_EXTENSION_ARTIFACT.format(self.endpoint, HOST_PLUGIN_PORT) headers = { _HEADER_VERSION: API_VERSION, _HEADER_CONTAINER_ID: self.container_id, _HEADER_HOST_CONFIG_NAME: self.role_config_name, _HEADER_ARTIFACT_LOCATION: artifact_url} if use_verify_header: headers[_HEADER_VERIFY_FROM_ARTIFACTS_BLOB] = "true" if artifact_manifest_url is not None: headers[_HEADER_ARTIFACT_MANIFEST_LOCATION] = artifact_manifest_url return url, headers def report_fetch_health(self, uri, is_healthy=True, source='', response=''): if uri != URI_FORMAT_GET_EXTENSION_ARTIFACT.format(self.endpoint, HOST_PLUGIN_PORT): return if self.should_report(is_healthy, self.fetch_error_state, self.fetch_last_timestamp, HostPluginProtocol.FETCH_REPORTING_PERIOD): self.fetch_last_timestamp = datetime.datetime.now(UTC) health_signal = self.fetch_error_state.is_triggered() is False self.health_service.report_host_plugin_extension_artifact(is_healthy=health_signal, source=source, response=response) def report_status_health(self, is_healthy, response=''): if 
self.should_report(is_healthy, self.status_error_state, self.status_last_timestamp, HostPluginProtocol.STATUS_REPORTING_PERIOD): self.status_last_timestamp = datetime.datetime.now(UTC) health_signal = self.status_error_state.is_triggered() is False self.health_service.report_host_plugin_status(is_healthy=health_signal, response=response) @staticmethod def should_report(is_healthy, error_state, last_timestamp, period): """ Determine whether a health signal should be reported :param is_healthy: whether the current measurement is healthy :param error_state: the error state which is tracking time since failure :param last_timestamp: the last measurement time stamp :param period: the reporting period :return: True if the signal should be reported, False otherwise """ if is_healthy: # we only reset the error state upon success, since we want to keep # reporting the failure; this is different to other uses of error states # which do not have a separate periodicity error_state.reset() else: error_state.incr() if last_timestamp is None: last_timestamp = datetime.datetime.now(UTC) - period return datetime.datetime.now(UTC) >= (last_timestamp + period) def put_vm_log(self, content): """ Try to upload VM logs, a compressed zip file, via the host plugin /vmAgentLog channel. :param content: the binary content of the zip file to upload """ if not self.ensure_initialized(): raise ProtocolError("HostGAPlugin: HostGAPlugin is not available") if content is None: raise ProtocolError("HostGAPlugin: Invalid argument passed to upload VM logs. 
Content was not provided.") url = URI_FORMAT_PUT_LOG.format(self.endpoint, HOST_PLUGIN_PORT) response = restutil.http_put(url, data=content, headers=self._build_log_headers(), redact_data=True, timeout=30) if restutil.request_failed(response): error_response = restutil.read_response_error(response) raise HttpError("HostGAPlugin: Upload VM logs failed: {0}".format(error_response)) return response def put_vm_status(self, status_blob, sas_url, config_blob_type=None): """ Try to upload the VM status via the host plugin /status channel :param sas_url: the blob SAS url to pass to the host plugin :param config_blob_type: the blob type from the extension config :type status_blob: StatusBlob """ if not self.ensure_initialized(): raise ProtocolError("HostGAPlugin: HostGAPlugin is not available") if status_blob is None or status_blob.vm_status is None: raise ProtocolError("HostGAPlugin: Status blob was not provided") logger.verbose("HostGAPlugin: Posting VM status") blob_type = status_blob.type if status_blob.type else config_blob_type if blob_type == "BlockBlob": self._put_block_blob_status(sas_url, status_blob) else: self._put_page_blob_status(sas_url, status_blob) def _put_block_blob_status(self, sas_url, status_blob): url = URI_FORMAT_PUT_VM_STATUS.format(self.endpoint, HOST_PLUGIN_PORT) response = restutil.http_put(url, data=self._build_status_data( sas_url, status_blob.get_block_blob_headers(len(status_blob.data)), bytearray(status_blob.data, encoding='utf-8')), headers=self._build_status_headers()) if restutil.request_failed(response): error_response = restutil.read_response_error(response) is_healthy = not restutil.request_failed_at_hostplugin(response) self.report_status_health(is_healthy=is_healthy, response=error_response) raise HttpError("HostGAPlugin: Put BlockBlob failed: {0}" .format(error_response)) else: self.report_status_health(is_healthy=True) logger.verbose("HostGAPlugin: Put BlockBlob status succeeded") def _put_page_blob_status(self, sas_url, 
status_blob): url = URI_FORMAT_PUT_VM_STATUS.format(self.endpoint, HOST_PLUGIN_PORT) # Convert the status into a blank-padded string whose length is modulo 512 status = bytearray(status_blob.data, encoding='utf-8') status_size = int((len(status) + 511) / 512) * 512 status = bytearray(status_blob.data.ljust(status_size), encoding='utf-8') # First, initialize an empty blob response = restutil.http_put(url, data=self._build_status_data( sas_url, status_blob.get_page_blob_create_headers(status_size)), headers=self._build_status_headers()) if restutil.request_failed(response): error_response = restutil.read_response_error(response) is_healthy = not restutil.request_failed_at_hostplugin(response) self.report_status_health(is_healthy=is_healthy, response=error_response) raise HttpError("HostGAPlugin: Failed PageBlob clean-up: {0}" .format(error_response)) else: self.report_status_health(is_healthy=True) logger.verbose("HostGAPlugin: PageBlob clean-up succeeded") # Then, upload the blob in pages if sas_url.count("?") <= 0: sas_url = "{0}?comp=page".format(sas_url) else: sas_url = "{0}&comp=page".format(sas_url) start = 0 end = 0 while start < len(status): # Create the next page end = start + min(len(status) - start, MAXIMUM_PAGEBLOB_PAGE_SIZE) page_size = int((end - start + 511) / 512) * 512 buf = bytearray(page_size) buf[0: end - start] = status[start: end] # Send the page response = restutil.http_put(url, data=self._build_status_data( sas_url, status_blob.get_page_blob_page_headers(start, end), buf), headers=self._build_status_headers()) if restutil.request_failed(response): error_response = restutil.read_response_error(response) is_healthy = not restutil.request_failed_at_hostplugin(response) self.report_status_health(is_healthy=is_healthy, response=error_response) raise HttpError( "HostGAPlugin Error: Put PageBlob bytes " "[{0},{1}]: {2}".format(start, end, error_response)) # Advance to the next page (if any) start = end def _build_status_data(self, sas_url, 
blob_headers, content=None): headers = [] for name in iter(blob_headers.keys()): headers.append({ 'headerName': name, 'headerValue': blob_headers[name] }) data = { 'requestUri': sas_url, 'headers': headers } if not content is None: data['content'] = self._base64_encode(content) return json.dumps(data, sort_keys=True) def _build_status_headers(self): return { _HEADER_VERSION: API_VERSION, "Content-type": "application/json", _HEADER_CONTAINER_ID: self.container_id, _HEADER_HOST_CONFIG_NAME: self.role_config_name } def _build_log_headers(self): return { _HEADER_VERSION: API_VERSION, _HEADER_CONTAINER_ID: self.container_id, _HEADER_DEPLOYMENT_ID: self.deployment_id, _HEADER_CLIENT_NAME: AGENT_NAME, _HEADER_CLIENT_VERSION: AGENT_VERSION, _HEADER_CORRELATION_ID: str(uuid.uuid4()) } def _base64_encode(self, data): s = base64.b64encode(bytes(data)) if PY_VERSION_MAJOR > 2: return s.decode('utf-8') return s @staticmethod def _get_fast_track_state_file(): # This file keeps the timestamp of the most recent goal state if it was retrieved via Fast Track return os.path.join(conf.get_lib_dir(), "fast_track.json") # Multiple threads create instances of HostPluginProtocol; we use this lock to protect access to the state file for Fast Track _fast_track_state_lock = threading.RLock() @staticmethod def _save_fast_track_state(timestamp): try: with HostPluginProtocol._fast_track_state_lock: with open(HostPluginProtocol._get_fast_track_state_file(), "w") as file_: json.dump({"timestamp": timestamp}, file_) except Exception as e: logger.warn("Error updating the Fast Track state ({0}): {1}", HostPluginProtocol._get_fast_track_state_file(), ustr(e)) @staticmethod def clear_fast_track_state(): try: with HostPluginProtocol._fast_track_state_lock: if os.path.exists(HostPluginProtocol._get_fast_track_state_file()): os.remove(HostPluginProtocol._get_fast_track_state_file()) except Exception as e: logger.warn("Error clearing the current state for Fast Track ({0}): {1}", 
HostPluginProtocol._get_fast_track_state_file(), ustr(e)) @staticmethod def get_fast_track_timestamp(): """ Returns the timestamp of the most recent FastTrack goal state retrieved by fetch_vm_settings(), or a timestamp representing datetime.min if the most recent goal state was Fabric or fetch_vm_settings() has not been invoked. """ with HostPluginProtocol._fast_track_state_lock: state_file = HostPluginProtocol._get_fast_track_state_file() if not os.path.exists(state_file): return timeutil.create_utc_timestamp(datetime_min_utc) try: with open(state_file, "r") as file_: return json.load(file_)["timestamp"] except Exception as e: logger.warn("Can't retrieve the timestamp for the most recent Fast Track goal state ({0}), will assume the current time. Error: {1}", state_file, ustr(e)) return timeutil.create_utc_timestamp(datetime.datetime.now(UTC)) def fetch_vm_settings(self, force_update=False): """ Queries the vmSettings from the HostGAPlugin and returns an (ExtensionsGoalState, bool) tuple with the vmSettings and a boolean indicating if they are an updated (True) or a cached value (False). Raises * VmSettingsNotSupported if the HostGAPlugin does not support the vmSettings API * VmSettingsSupportStopped if the HostGAPlugin stopped supporting the vmSettings API * VmSettingsParseError if the HostGAPlugin returned invalid vmSettings (e.g. syntax error) * ResourceGoneError if the container ID and roleconfig name need to be refreshed * ProtocolError if the request fails for any other reason (e.g. not supported, time out, server error) """ def raise_not_supported(): try: if self._supports_vm_settings: # The most recent goal state was delivered using FastTrack, and suddenly the HostGAPlugin does not support the vmSettings API anymore. # This can happen if, for example, the VM is migrated across host nodes that are running different versions of the HostGAPlugin. logger.warn("The HostGAPlugin stopped supporting the vmSettings API. 
If there is a pending FastTrack goal state, it will not be executed.") add_event(op=WALAEventOperation.VmSettings, message="[VmSettingsSupportStopped] HostGAPlugin: {0}".format(self._version), is_success=False, log_event=False) raise VmSettingsSupportStopped(self._fast_track_timestamp) else: logger.info("HostGAPlugin {0} does not support the vmSettings API. Will not use FastTrack.", self._version) add_event(op=WALAEventOperation.VmSettings, message="[VmSettingsNotSupported] HostGAPlugin: {0}".format(self._version), is_success=True) raise VmSettingsNotSupported() finally: self._supports_vm_settings = False self._supports_vm_settings_next_check = datetime.datetime.now(UTC) + datetime.timedelta(hours=6) # check again in 6 hours def format_message(msg): return "GET vmSettings [correlation ID: {0} eTag: {1}]: {2}".format(correlation_id, etag, msg) try: # Raise if VmSettings are not supported, but check again periodically since the HostGAPlugin could have been updated since the last check # Note that self._host_plugin_supports_vm_settings can be None, so we need to compare against False if not self._supports_vm_settings and self._supports_vm_settings_next_check > datetime.datetime.now(UTC): # Raise VmSettingsNotSupported directly instead of using raise_not_supported() to avoid resetting the timestamp for the next check raise VmSettingsNotSupported() etag = None if force_update or self._cached_vm_settings is None else self._cached_vm_settings.etag correlation_id = str(uuid.uuid4()) self._vm_settings_error_reporter.report_request() url, headers = self.get_vm_settings_request(correlation_id) if etag is not None: headers['if-none-match'] = etag response = restutil.http_get(url, headers=headers, use_proxy=False, max_retry=1, return_raw_response=True) if response.status == httpclient.GONE: raise ResourceGoneError() if response.status == httpclient.NOT_FOUND: # the HostGAPlugin does not support FastTrack raise_not_supported() if response.status == httpclient.NOT_MODIFIED: # The 
goal state hasn't changed, return the current instance return self._cached_vm_settings, False if response.status != httpclient.OK: error_description = restutil.read_response_error(response) # For historical reasons the HostGAPlugin returns 502 (BAD_GATEWAY) for internal errors instead of using # 500 (INTERNAL_SERVER_ERROR). We add a short prefix to the error message in the hope that it will help # clear any confusion produced by the poor choice of status code. if response.status == httpclient.BAD_GATEWAY: error_description = "[Internal error in HostGAPlugin] {0}".format(error_description) error_description = format_message(error_description) if 400 <= response.status <= 499: self._vm_settings_error_reporter.report_error(error_description, _VmSettingsError.ClientError) elif 500 <= response.status <= 599: self._vm_settings_error_reporter.report_error(error_description, _VmSettingsError.ServerError) else: self._vm_settings_error_reporter.report_error(error_description, _VmSettingsError.HttpError) raise ProtocolError(error_description) for h in response.getheaders(): if h[0].lower() == 'etag': response_etag = h[1] break else: # since the vmSettings were updated, the response must include an etag message = format_message("The vmSettings response does not include an Etag header") raise ProtocolError(message) response_content = ustr(response.read(), encoding='utf-8') vm_settings = ExtensionsGoalStateFactory.create_from_vm_settings(response_etag, response_content, correlation_id) # log the HostGAPlugin version if vm_settings.host_ga_plugin_version != self._version: self._version = vm_settings.host_ga_plugin_version message = "HostGAPlugin version: {0}".format(vm_settings.host_ga_plugin_version) logger.info(message) add_event(op=WALAEventOperation.HostPlugin, message=message, is_success=True) # Don't support HostGAPlugin versions older than 133 if vm_settings.host_ga_plugin_version < FlexibleVersion("1.0.8.133"): raise_not_supported() self._supports_vm_settings = True 
self._cached_vm_settings = vm_settings if vm_settings.source == GoalStateSource.FastTrack: self._fast_track_timestamp = vm_settings.created_on_timestamp self._save_fast_track_state(vm_settings.created_on_timestamp) else: self.clear_fast_track_state() return vm_settings, True except (ProtocolError, ResourceGoneError, VmSettingsNotSupported, VmSettingsParseError): raise except Exception as exception: if isinstance(exception, IOError) and "timed out" in ustr(exception): message = format_message("Timeout") self._vm_settings_error_reporter.report_error(message, _VmSettingsError.Timeout) else: message = format_message("Request failed: {0}".format(textutil.format_exception(exception))) self._vm_settings_error_reporter.report_error(message, _VmSettingsError.RequestFailed) raise ProtocolError(message) finally: self._vm_settings_error_reporter.report_summary() class VmSettingsNotSupported(TypeError): """ Indicates that the HostGAPlugin does not support the vmSettings API """ class VmSettingsSupportStopped(VmSettingsNotSupported): """ Indicates that the HostGAPlugin supported the vmSettings API in previous calls, but now it does not support it for current call. This can happen, for example, if the VM is migrated across nodes with different HostGAPlugin versions. 
""" def __init__(self, timestamp): super(VmSettingsSupportStopped, self).__init__() self.timestamp = timestamp class _VmSettingsError(object): ClientError = 'ClientError' HttpError = 'HttpError' RequestFailed = 'RequestFailed' ServerError = 'ServerError' Timeout = 'Timeout' class _VmSettingsErrorReporter(object): _MaxErrors = 3 # Max number of errors reported to telemetry (by period) _Period = datetime.timedelta(hours=1) # How often to report the summary def __init__(self): self._reset() def _reset(self): self._request_count = 0 # Total number of vmSettings HTTP requests self._error_count = 0 # Total number of errors issuing vmSettings requests (includes all kinds of errors) self._client_error_count = 0 # Count of client side errors (HTTP status in the 400s) self._http_error_count = 0 # Count of HTTP errors other than 400s and 500s self._request_failure_count = 0 # Total count of requests that could not be issued (does not include timeouts or requests that were actually issued and failed, for example, with 500 or 400 statuses) self._server_error_count = 0 # Count of server side errors (HTTP status in the 500s) self._timeout_count = 0 # Count of timeouts on vmSettings requests self._next_period = datetime.datetime.now(UTC) + _VmSettingsErrorReporter._Period def report_request(self): self._request_count += 1 def report_error(self, error, category): self._error_count += 1 if self._error_count <= _VmSettingsErrorReporter._MaxErrors: add_event(op=WALAEventOperation.VmSettings, message="[{0}] {1}".format(category, error), is_success=True, log_event=False) if category == _VmSettingsError.ClientError: self._client_error_count += 1 elif category == _VmSettingsError.HttpError: self._http_error_count += 1 elif category == _VmSettingsError.RequestFailed: self._request_failure_count += 1 elif category == _VmSettingsError.ServerError: self._server_error_count += 1 elif category == _VmSettingsError.Timeout: self._timeout_count += 1 def report_summary(self): if 
datetime.datetime.now(UTC) >= self._next_period: summary = { "requests": self._request_count, "errors": self._error_count, "serverErrors": self._server_error_count, "clientErrors": self._client_error_count, "timeouts": self._timeout_count, "failedRequests": self._request_failure_count } message = json.dumps(summary) add_event(op=WALAEventOperation.VmSettingsSummary, message=message, is_success=True, log_event=False) if self._error_count > 0: logger.info("[VmSettingsSummary] {0}", message) self._reset() Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/imds.py000066400000000000000000000333161510742556200252640ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); import json import re from collections import namedtuple import azurelinuxagent.common.utils.restutil as restutil from azurelinuxagent.common.exception import HttpError, ResourceGoneError from azurelinuxagent.common.future import ustr import azurelinuxagent.common.logger as logger from azurelinuxagent.common.datacontract import DataContract, set_properties from azurelinuxagent.common.utils.flexible_version import FlexibleVersion IMDS_ENDPOINT = '169.254.169.254' APIVERSION = '2018-02-01' BASE_METADATA_URI = "http://{0}/metadata/{1}?api-version={2}" IMDS_IMAGE_ORIGIN_UNKNOWN = 0 IMDS_IMAGE_ORIGIN_CUSTOM = 1 IMDS_IMAGE_ORIGIN_ENDORSED = 2 IMDS_IMAGE_ORIGIN_PLATFORM = 3 MetadataResult = namedtuple('MetadataResult', ['success', 'service_error', 'response']) IMDS_RESPONSE_SUCCESS = 0 IMDS_RESPONSE_ERROR = 1 IMDS_CONNECTION_ERROR = 2 IMDS_INTERNAL_SERVER_ERROR = 3 def get_imds_client(): return ImdsClient() # A *slightly* future proof list of endorsed distros. # -> e.g. I have predicted the future and said that 20.04-LTS will exist # and is endored. # # See https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros for # more details. # # This is not an exhaustive list. 
This is a best attempt to mark images as # endorsed or not. Image publishers do not encode all of the requisite information # in their publisher, offer, sku, and version to definitively mark something as # endorsed or not. This is not perfect, but it is approximately 98% perfect. ENDORSED_IMAGE_INFO_MATCHER_JSON = """{ "CANONICAL": { "UBUNTUSERVER": { "List": [ "14.04.0-LTS", "14.04.1-LTS", "14.04.2-LTS", "14.04.3-LTS", "14.04.4-LTS", "14.04.5-LTS", "14.04.6-LTS", "14.04.7-LTS", "14.04.8-LTS", "16.04-LTS", "16.04.0-LTS", "18.04-LTS", "20.04-LTS", "22.04-LTS" ] } }, "COREOS": { "COREOS": { "STABLE": { "Minimum": "494.4.0" } } }, "CREDATIV": { "DEBIAN": { "Minimum": "7" } }, "OPENLOGIC": { "CENTOS": { "Minimum": "6.3", "List": [ "7-LVM", "7-RAW" ] }, "CENTOS-HPC": { "Minimum": "6.3" } }, "REDHAT": { "RHEL": { "Minimum": "6.7", "List": [ "7-LVM", "7-RAW" ] }, "RHEL-HANA": { "Minimum": "6.7" }, "RHEL-SAP": { "Minimum": "6.7" }, "RHEL-SAP-APPS": { "Minimum": "6.7" }, "RHEL-SAP-HANA": { "Minimum": "6.7" } }, "SUSE": { "SLES": { "List": [ "11-SP4", "11-SP5", "11-SP6", "12-SP1", "12-SP2", "12-SP3", "12-SP4", "12-SP5", "12-SP6" ] }, "SLES-BYOS": { "List": [ "11-SP4", "12", "12-SP1", "12-SP2", "12-SP3", "12-SP4", "12-SP5", "15", "15-SP1", "15-SP2", "15-SP3", "15-SP4", "15-SP5" ] }, "SLES-SAP": { "List": [ "11-SP4", "12", "12-SP1", "12-SP2", "12-SP3", "12-SP4", "12-SP5", "15", "15-SP1", "15-SP2", "15-SP3", "15-SP4", "15-SP5" ] }, "SLE-HPC": { "List": [ "15-SP1", "15-SP2", "15-SP3", "15-SP4", "15-SP5" ] } } }""" class ImageInfoMatcher(object): def __init__(self, doc): self.doc = json.loads(doc) def is_match(self, publisher, offer, sku, version): def _is_match_walk(doci, keys): key = keys.pop(0).upper() if key is None: return False if key not in doci: return False if 'List' in doci[key] and keys[0] in doci[key]['List']: return True if 'Match' in doci[key] and re.match(doci[key]['Match'], keys[0]): return True if 'Minimum' in doci[key]: try: return FlexibleVersion(keys[0]) >= 
FlexibleVersion(doci[key]['Minimum']) except ValueError: pass return _is_match_walk(doci[key], keys) return _is_match_walk(self.doc, [ publisher, offer, sku, version ]) class ComputeInfo(DataContract): __matcher = ImageInfoMatcher(ENDORSED_IMAGE_INFO_MATCHER_JSON) def __init__(self, location=None, name=None, offer=None, osType=None, placementGroupId=None, platformFaultDomain=None, placementUpdateDomain=None, publisher=None, resourceGroupName=None, sku=None, subscriptionId=None, tags=None, version=None, vmId=None, vmSize=None, vmScaleSetName=None, zone=None): self.location = location self.name = name self.offer = offer self.osType = osType self.placementGroupId = placementGroupId self.platformFaultDomain = platformFaultDomain self.platformUpdateDomain = placementUpdateDomain self.publisher = publisher self.resourceGroupName = resourceGroupName self.sku = sku self.subscriptionId = subscriptionId self.tags = tags self.version = version self.vmId = vmId self.vmSize = vmSize self.vmScaleSetName = vmScaleSetName self.zone = zone @property def image_info(self): return "{0}:{1}:{2}:{3}".format(self.publisher, self.offer, self.sku, self.version) @property def image_origin(self): """ An integer value describing the origin of the image. 0 -> unknown 1 -> custom - user created image 2 -> endorsed - See https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros 3 -> platform - non-endorsed image that is available in the Azure Marketplace. 
""" try: if self.publisher == "": return IMDS_IMAGE_ORIGIN_CUSTOM if ComputeInfo.__matcher.is_match(self.publisher, self.offer, self.sku, self.version): return IMDS_IMAGE_ORIGIN_ENDORSED else: return IMDS_IMAGE_ORIGIN_PLATFORM except Exception as e: logger.periodic_warn(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] Could not determine the image origin from IMDS: {0}".format(ustr(e))) return IMDS_IMAGE_ORIGIN_UNKNOWN class ImdsClient(object): def __init__(self, version=APIVERSION): self._api_version = version self._headers = { 'User-Agent': restutil.HTTP_USER_AGENT, 'Metadata': True, } self._health_headers = { 'User-Agent': restutil.HTTP_USER_AGENT_HEALTH, 'Metadata': True, } self._regex_ioerror = re.compile(r".*HTTP Failed. GET http://[^ ]+ -- IOError .*") self._regex_throttled = re.compile(r".*HTTP Retry. GET http://[^ ]+ -- Status Code 429 .*") def _get_metadata_url(self, endpoint, resource_path): return BASE_METADATA_URI.format(endpoint, resource_path, self._api_version) def _http_get(self, endpoint, resource_path, headers): url = self._get_metadata_url(endpoint, resource_path) return restutil.http_get(url, headers=headers, use_proxy=False) def _get_metadata_from_endpoint(self, endpoint, resource_path, headers): """ Get metadata from one of the IMDS endpoints. 
:param str endpoint: IMDS endpoint to call :param str resource_path: path of IMDS resource :param bool headers: headers to send in the request :return: Tuple status: one of the following response status codes: IMDS_RESPONSE_SUCCESS, IMDS_RESPONSE_ERROR, IMDS_CONNECTION_ERROR, IMDS_INTERNAL_SERVER_ERROR response: IMDS response on IMDS_RESPONSE_SUCCESS, failure message otherwise """ try: resp = self._http_get(endpoint=endpoint, resource_path=resource_path, headers=headers) except ResourceGoneError: return IMDS_INTERNAL_SERVER_ERROR, "IMDS error in /metadata/{0}: HTTP Failed with Status Code 410: Gone".format(resource_path) except HttpError as e: msg = str(e) if self._regex_throttled.match(msg): return IMDS_RESPONSE_ERROR, "IMDS error in /metadata/{0}: Throttled".format(resource_path) if self._regex_ioerror.match(msg): logger.periodic_warn(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] [IMDS_CONNECTION_ERROR] Unable to connect to IMDS endpoint {0}".format(endpoint)) return IMDS_CONNECTION_ERROR, "IMDS error in /metadata/{0}: Unable to connect to endpoint".format(resource_path) return IMDS_INTERNAL_SERVER_ERROR, "IMDS error in /metadata/{0}: {1}".format(resource_path, msg) if resp.status >= 500: return IMDS_INTERNAL_SERVER_ERROR, "IMDS error in /metadata/{0}: {1}".format( resource_path, restutil.read_response_error(resp)) if restutil.request_failed(resp): return IMDS_RESPONSE_ERROR, "IMDS error in /metadata/{0}: {1}".format( resource_path, restutil.read_response_error(resp)) return IMDS_RESPONSE_SUCCESS, resp.read() def get_metadata(self, resource_path, is_health): """ Get metadata from IMDS, falling back to Wireserver endpoint if necessary. 
:param str resource_path: path of IMDS resource :param bool is_health: True if for health/heartbeat, False otherwise :return: instance of MetadataResult :rtype: MetadataResult """ headers = self._health_headers if is_health else self._headers endpoint = IMDS_ENDPOINT status, resp = self._get_metadata_from_endpoint(endpoint, resource_path, headers) if status == IMDS_RESPONSE_SUCCESS: return MetadataResult(True, False, resp) elif status == IMDS_INTERNAL_SERVER_ERROR: return MetadataResult(False, True, resp) # else it's a client-side error, e.g. IMDS_CONNECTION_ERROR return MetadataResult(False, False, resp) def get_compute(self): """ Fetch compute information. :return: instance of a ComputeInfo :rtype: ComputeInfo """ # ensure we get a 200 result = self.get_metadata('instance/compute', is_health=False) if not result.success: raise HttpError(result.response) data = json.loads(ustr(result.response, encoding="utf-8")) compute_info = ComputeInfo() set_properties('compute', compute_info, data) return compute_info def validate(self): """ Determines whether the metadata instance api returns 200, and the response is valid: compute should contain location, name, subscription id, and vm size and network should contain mac address and private ip address. :return: Tuple is_healthy: False when service returns an error, True on successful response and connection failures. 
error_response: validation failure details to assist with debugging """ # ensure we get a 200 result = self.get_metadata('instance', is_health=True) if not result.success: # we should only return False when the service is unhealthy return (not result.service_error), result.response # ensure the response is valid json try: json_data = json.loads(ustr(result.response, encoding="utf-8")) except Exception as e: return False, "JSON parsing failed: {0}".format(ustr(e)) # ensure all expected fields are present and have a value try: # TODO: compute fields cannot be verified yet since we need to exclude rdfe vms (#1249) self.check_field(json_data, 'network') self.check_field(json_data['network'], 'interface') self.check_field(json_data['network']['interface'][0], 'macAddress') self.check_field(json_data['network']['interface'][0], 'ipv4') self.check_field(json_data['network']['interface'][0]['ipv4'], 'ipAddress') self.check_field(json_data['network']['interface'][0]['ipv4']['ipAddress'][0], 'privateIpAddress') except ValueError as v: return False, ustr(v) return True, '' @staticmethod def check_field(dict_obj, field): if field not in dict_obj or dict_obj[field] is None: raise ValueError('Missing field: [{0}]'.format(field)) if len(dict_obj[field]) == 0: raise ValueError('Empty field: [{0}]'.format(field)) Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/metadata_server_migration_util.py000066400000000000000000000144221510742556200326010ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import re import os import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion # Name for Metadata Server Protocol _METADATA_PROTOCOL_NAME = "MetadataProtocol" # MetadataServer Certificates for Cleanup _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME = "V2TransportPrivate.pem" _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME = "V2TransportCert.pem" _LEGACY_METADATA_SERVER_P7B_FILE_NAME = "Certificates.p7b" # MetadataServer Endpoint _KNOWN_METADATASERVER_IP = "169.254.169.254" def is_metadata_server_artifact_present(): metadata_artifact_path = os.path.join(conf.get_lib_dir(), _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME) return os.path.isfile(metadata_artifact_path) def cleanup_metadata_server_artifacts(): logger.info("Clean up for MetadataServer to WireServer protocol migration: removing MetadataServer certificates and resetting firewall rules.") _cleanup_metadata_protocol_certificates() _reset_firewall_rules() def _cleanup_metadata_protocol_certificates(): """ Removes MetadataServer Certificates. """ lib_directory = conf.get_lib_dir() _ensure_file_removed(lib_directory, _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME) _ensure_file_removed(lib_directory, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME) _ensure_file_removed(lib_directory, _LEGACY_METADATA_SERVER_P7B_FILE_NAME) def _reset_firewall_rules(): """ Removes MetadataServer firewall rule so IMDS can be used. 
""" try: _remove_firewall(dst_ip=_KNOWN_METADATASERVER_IP, uid=os.getuid(), wait=_get_firewall_will_wait()) except Exception as e: add_event(op=WALAEventOperation.Firewall, message="Failed to remove firewall rule for MetadataServer: {0}".format(e), is_success=False) def _ensure_file_removed(directory, file_name): """ Removes files if they are present. """ path = os.path.join(directory, file_name) if os.path.isfile(path): os.remove(path) # # NOTE: The code below was taken almost verbatim from the old firewall code that use to reside in osutil (default.py), with only very minor edits # _IPTABLES_VERSION_PATTERN = re.compile(r"^[^\d\.]*([\d\.]+).*$") _IPTABLES_LOCKING_VERSION = FlexibleVersion('1.4.21') def _add_wait(wait, command): """ If 'wait' is True, adds the wait option (-w) to the given iptables command line """ if wait: command.insert(1, "-w") return command def _get_iptables_version_command(): return ["iptables", "--version"] # Precisely delete the rules created by the agent. This rule was used <= 2.2.25. This rule helped to validate our change, and determine impact. 
def _get_firewall_delete_conntrack_accept_command(wait, destination): return _add_wait( wait, ["iptables", "-t", "security", "-D", "OUTPUT", "-d", destination, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "ACCEPT"]) def _get_delete_accept_tcp_rule(wait, destination): return _add_wait( wait, ["iptables", "-t", "security", "-D", "OUTPUT", "-d", destination, "-p", "tcp", "--destination-port", "53", "-j", "ACCEPT"]) def _get_firewall_delete_owner_accept_command(wait, destination, owner_uid): return _add_wait( wait, ["iptables", "-t", "security", "-D", "OUTPUT", "-d", destination, "-p", "tcp", "-m", "owner", "--uid-owner", str(owner_uid), "-j", "ACCEPT"]) def _get_firewall_delete_conntrack_drop_command(wait, destination): return _add_wait( wait, ["iptables", "-t", "security", "-D", "OUTPUT", "-d", destination, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]) def _get_firewall_will_wait(): # Determine if iptables will serialize access try: output = shellutil.run_command(_get_iptables_version_command()) except Exception as e: msg = "Unable to determine version of iptables: {0}".format(ustr(e)) logger.warn(msg) raise Exception(msg) m = _IPTABLES_VERSION_PATTERN.match(output) if m is None: msg = "iptables did not return version information: {0}".format(output) logger.warn(msg) raise Exception(msg) wait = "-w" \ if FlexibleVersion(m.group(1)) >= _IPTABLES_LOCKING_VERSION \ else "" return wait def _delete_rule(rule): """ Continually execute the delete operation until the return code is non-zero or the limit has been reached. """ for i in range(1, 100): # pylint: disable=W0612 try: rc = shellutil.run_command(rule) # pylint: disable=W0612 except shellutil.CommandError as e: if e.returncode == 1: return if e.returncode == 2: raise Exception("invalid firewall deletion rule '{0}'".format(rule)) def _remove_firewall(dst_ip, uid, wait): try: # This rule was <= 2.2.25 only, and may still exist on some VMs. 
Until 2.2.25 # has aged out, keep this cleanup in place. _delete_rule(_get_firewall_delete_conntrack_accept_command(wait, dst_ip)) _delete_rule(_get_delete_accept_tcp_rule(wait, dst_ip)) _delete_rule(_get_firewall_delete_owner_accept_command(wait, dst_ip, uid)) _delete_rule(_get_firewall_delete_conntrack_drop_command(wait, dst_ip)) return True except Exception as e: logger.info("Unable to remove firewall -- no further attempts will be made: {0}".format(ustr(e))) return False Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/ovfenv.py000066400000000000000000000116601510742556200256310ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # """ Copy and parse ovf-env.xml from provisioning ISO and local cache """ import os # pylint: disable=W0611 import re # pylint: disable=W0611 import shutil # pylint: disable=W0611 import xml.dom.minidom as minidom # pylint: disable=W0611 import azurelinuxagent.common.logger as logger from azurelinuxagent.common.exception import ProtocolError from azurelinuxagent.common.future import ustr # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil # pylint: disable=W0611 from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, findtext OVF_VERSION = "1.0" OVF_NAME_SPACE = "http://schemas.dmtf.org/ovf/environment/1" WA_NAME_SPACE = "http://schemas.microsoft.com/windowsazure" def _validate_ovf(val, msg): if val is None: raise ProtocolError("Failed to validate OVF: {0}".format(msg)) class OvfEnv(object): """ Read and process provisioning info from the provisioning file ovf-env.xml """ def __init__(self, xml_text): if xml_text is None: raise ValueError("ovf-env is None") logger.verbose("Load ovf-env.xml") self.hostname = None self.username = None self.user_password = None self.customdata = None self.disable_ssh_password_auth = True self.ssh_pubkeys = [] self.ssh_keypairs = [] self.provision_guest_agent = None self.parse(xml_text) def parse(self, xml_text): """ Parse the xml tree, retrieving user and ssh key information. """ wans = WA_NAME_SPACE ovfns = OVF_NAME_SPACE xml_doc = parse_doc(xml_text) environment = find(xml_doc, "Environment", namespace=ovfns) _validate_ovf(environment, "Environment not found") section = find(environment, "ProvisioningSection", namespace=wans) _validate_ovf(section, "ProvisioningSection not found") version = findtext(environment, "Version", namespace=wans) _validate_ovf(version, "Version not found") if version > OVF_VERSION: logger.warn("Newer provisioning configuration detected. 
" "Please consider updating waagent") conf_set = find(section, "LinuxProvisioningConfigurationSet", namespace=wans) _validate_ovf(conf_set, "LinuxProvisioningConfigurationSet not found") self.hostname = findtext(conf_set, "HostName", namespace=wans) _validate_ovf(self.hostname, "HostName not found") self.username = findtext(conf_set, "UserName", namespace=wans) _validate_ovf(self.username, "UserName not found") self.user_password = findtext(conf_set, "UserPassword", namespace=wans) self.customdata = findtext(conf_set, "CustomData", namespace=wans) auth_option = findtext(conf_set, "DisableSshPasswordAuthentication", namespace=wans) if auth_option is not None and auth_option.lower() == "true": self.disable_ssh_password_auth = True else: self.disable_ssh_password_auth = False public_keys = findall(conf_set, "PublicKey", namespace=wans) for public_key in public_keys: path = findtext(public_key, "Path", namespace=wans) fingerprint = findtext(public_key, "Fingerprint", namespace=wans) value = findtext(public_key, "Value", namespace=wans) self.ssh_pubkeys.append((path, fingerprint, value)) keypairs = findall(conf_set, "KeyPair", namespace=wans) for keypair in keypairs: path = findtext(keypair, "Path", namespace=wans) fingerprint = findtext(keypair, "Fingerprint", namespace=wans) self.ssh_keypairs.append((path, fingerprint)) platform_settings_section = find(environment, "PlatformSettingsSection", namespace=wans) _validate_ovf(platform_settings_section, "PlatformSettingsSection not found") platform_settings = find(platform_settings_section, "PlatformSettings", namespace=wans) _validate_ovf(platform_settings, "PlatformSettings not found") self.provision_guest_agent = findtext(platform_settings, "ProvisionGuestAgent", namespace=wans) _validate_ovf(self.provision_guest_agent, "ProvisionGuestAgent not found") Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/restapi.py000066400000000000000000000255261510742556200260030ustar00rootroot00000000000000# Microsoft Azure 
Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import socket import time from azurelinuxagent.common.datacontract import DataContract, DataContractList from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils.textutil import getattrib from azurelinuxagent.common.version import DISTRO_VERSION, DISTRO_NAME, CURRENT_VERSION VERSION_0 = "0.0.0.0" class VMInfo(DataContract): def __init__(self, subscriptionId=None, vmName=None, roleName=None, roleInstanceName=None, tenantName=None): self.subscriptionId = subscriptionId self.vmName = vmName self.roleName = roleName self.roleInstanceName = roleInstanceName self.tenantName = tenantName class VMAgentFamily(object): def __init__(self, name): self.name = name # Two-state: None, string. Set to None if version not specified in the GS self.version = None # Two-state: None, string. Set to None if this property not specified in the GS. self.from_version = None # Tri-state: None, True, False. Set to None if this property not specified in the GS. self.is_version_from_rsm = None # Tri-state: None, True, False. Set to None if this property not specified in the GS. 
self.is_vm_enabled_for_rsm_upgrades = None self.uris = [] def __repr__(self): return self.__str__() def __str__(self): return "[name: '{0}' uris: {1}]".format(self.name, self.uris) class ExtensionState(object): Enabled = ustr("enabled") Disabled = ustr("disabled") class ExtensionRequestedState(object): """ This is the state of the Handler as requested by the Goal State. CRP only supports 2 states as of now - Enabled and Uninstall Disabled was used for older XML extensions and we keep it to support backward compatibility. """ Enabled = ustr("enabled") Disabled = ustr("disabled") Uninstall = ustr("uninstall") All = [Enabled, Disabled, Uninstall] class ExtensionSettings(object): """ The runtime settings associated with a Handler - Maps to Extension.PluginSettings.Plugin.RuntimeSettings for single config extensions in the ExtensionConfig.xml Eg: 1.settings, 2.settings - Maps to Extension.PluginSettings.Plugin.ExtensionRuntimeSettings for multi-config extensions in the ExtensionConfig.xml Eg: .1.settings, .2.settings """ def __init__(self, name=None, sequenceNumber=None, publicSettings=None, protectedSettings=None, certificateThumbprint=None, dependencyLevel=0, state=ExtensionState.Enabled): self.name = name self.sequenceNumber = sequenceNumber self.publicSettings = publicSettings self.protectedSettings = protectedSettings self.certificateThumbprint = certificateThumbprint self.dependencyLevel = dependencyLevel self.state = state def dependency_level_sort_key(self, handler_state): level = self.dependencyLevel # Process uninstall or disabled before enabled, in reverse order # Prioritize Handler state and Extension state both when sorting extensions # remap 0 to -1, 1 to -2, 2 to -3, etc if handler_state != ExtensionRequestedState.Enabled or self.state != ExtensionState.Enabled: level = (0 - level) - 1 return level def __repr__(self): return self.__str__() def __str__(self): return "{0}".format(self.name) class Extension(object): """ The main Plugin/handler specified by 
the publishers. Maps to Extension.PluginSettings.Plugins.Plugin in the ExtensionConfig.xml file Eg: Microsoft.OSTC.CustomScript """ def __init__(self, name=None): self.name = name self.version = None self.state = None self.settings = [] self.manifest_uris = [] self.supports_multi_config = False # An empty string for encoded_signature indicates that the extension is not signed, or that the goal state API does not support the signature property. self.encoded_signature = "" self.__invalid_handler_setting_reason = None @property def is_invalid_setting(self): return self.__invalid_handler_setting_reason is not None @property def invalid_setting_reason(self): return self.__invalid_handler_setting_reason @invalid_setting_reason.setter def invalid_setting_reason(self, value): self.__invalid_handler_setting_reason = value def dependency_level_sort_key(self): levels = [e.dependencyLevel for e in self.settings] if len(levels) == 0: level = 0 else: level = min(levels) # Process uninstall or disabled before enabled, in reverse order # remap 0 to -1, 1 to -2, 2 to -3, etc if self.state != u"enabled": level = (0 - level) - 1 return level def __repr__(self): return self.__str__() def __str__(self): return "{0}-{1}".format(self.name, self.version) class InVMGoalStateMetaData(DataContract): """ Object for parsing the GoalState MetaData received from CRP Eg: """ def __init__(self, in_vm_metadata_node): self.correlation_id = getattrib(in_vm_metadata_node, "correlationId") self.activity_id = getattrib(in_vm_metadata_node, "activityId") self.created_on_ticks = getattrib(in_vm_metadata_node, "createdOnTicks") self.in_svd_seq_no = getattrib(in_vm_metadata_node, "inSvdSeqNo") class ExtHandlerPackage(DataContract): def __init__(self, version=None): self.version = version self.uris = [] # TODO update the naming to align with metadata protocol self.isinternal = False self.disallow_major_upgrade = False class ExtHandlerPackageList(DataContract): def __init__(self): self.versions = 
DataContractList(ExtHandlerPackage) class VMProperties(DataContract): def __init__(self, certificateThumbprint=None): # TODO need to confirm the property name self.certificateThumbprint = certificateThumbprint class ProvisionStatus(DataContract): def __init__(self, status=None, subStatus=None, description=None): self.status = status self.subStatus = subStatus self.description = description self.properties = VMProperties() class ExtensionSubStatus(DataContract): def __init__(self, name=None, status=None, code=None, message=None): self.name = name self.status = status self.code = code self.message = message class ExtensionStatus(DataContract): def __init__(self, name=None, configurationAppliedTime=None, operation=None, status=None, seq_no=None, code=None, message=None): self.name = name self.configurationAppliedTime = configurationAppliedTime self.operation = operation self.status = status self.sequenceNumber = seq_no self.code = code self.message = message self.substatusList = DataContractList(ExtensionSubStatus) class ExtHandlerStatus(DataContract): def __init__(self, name=None, version=None, status=None, code=0, message=None): self.name = name self.version = version self.status = status self.code = code self.message = message self.supports_multi_config = False self.extension_status = None class VMAgentStatus(DataContract): def __init__(self, status=None, message=None, gs_aggregate_status=None, update_status=None): self.status = status self.message = message self.hostname = socket.gethostname() self.version = str(CURRENT_VERSION) self.osname = DISTRO_NAME self.osversion = DISTRO_VERSION self.extensionHandlers = DataContractList(ExtHandlerStatus) self.vm_artifacts_aggregate_status = VMArtifactsAggregateStatus(gs_aggregate_status) self.update_status = update_status self._supports_fast_track = False @property def supports_fast_track(self): return self._supports_fast_track def set_supports_fast_track(self, value): self._supports_fast_track = value class 
VMStatus(DataContract): def __init__(self, status, message, gs_aggregate_status=None, vm_agent_update_status=None): self.vmAgent = VMAgentStatus(status=status, message=message, gs_aggregate_status=gs_aggregate_status, update_status=vm_agent_update_status) class GoalStateAggregateStatus(DataContract): def __init__(self, seq_no, status=None, message="", code=None): self.message = message self.in_svd_seq_no = seq_no self.status = status self.code = code self.__utc_timestamp = time.gmtime() @property def processed_time(self): return self.__utc_timestamp class VMArtifactsAggregateStatus(DataContract): def __init__(self, gs_aggregate_status=None): self.goal_state_aggregate_status = gs_aggregate_status class RemoteAccessUser(DataContract): def __init__(self, name, encrypted_password, expiration): self.name = name self.encrypted_password = encrypted_password self.expiration = expiration class RemoteAccessUsersList(DataContract): def __init__(self): self.users = DataContractList(RemoteAccessUser) class VMAgentUpdateStatuses(object): Success = ustr("Success") Transitioning = ustr("Transitioning") Error = ustr("Error") Unknown = ustr("Unknown") class VMAgentUpdateStatus(object): def __init__(self, expected_version, status=VMAgentUpdateStatuses.Success, message="", code=0): self.expected_version = expected_version self.status = status self.message = message self.code = code Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/util.py000066400000000000000000000313071510742556200253030ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import errno import os import re import time import threading import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.singletonperthread import SingletonPerThread from azurelinuxagent.common.exception import ProtocolError, OSUtilError, \ ProtocolNotFoundError, DhcpError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.dhcp import get_dhcp_handler from azurelinuxagent.common.protocol.metadata_server_migration_util import cleanup_metadata_server_artifacts, \ is_metadata_server_artifact_present from azurelinuxagent.common.protocol.ovfenv import OvfEnv from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP, \ IOErrorCounter OVF_FILE_NAME = "ovf-env.xml" PROTOCOL_FILE_NAME = "Protocol" MAX_RETRY = 360 PROBE_INTERVAL = 10 ENDPOINT_FILE_NAME = "WireServerEndpoint" PASSWORD_PATTERN = "<UserPassword>.*?<" PASSWORD_REPLACEMENT = "<UserPassword>*<" WIRE_PROTOCOL_NAME = "WireProtocol" def get_protocol_util(): return ProtocolUtil() class ProtocolUtil(SingletonPerThread): """ ProtocolUtil handles initialization of the protocol instance. Two protocol types are involved: the wire protocol and the metadata protocol. Note: ProtocolUtil is a subclass of SingletonPerThread, which means there is only a single instance of ProtocolUtil per thread. 
""" def __init__(self): self._lock = threading.RLock() # protects the files on disk created during protocol detection self._protocol = None self.endpoint = None self.osutil = get_osutil() self.dhcp_handler = get_dhcp_handler() def copy_ovf_env(self): """ Copy ovf env file from dvd to hard disk. Remove password before save it to the disk """ dvd_mount_point = conf.get_dvd_mount_point() ovf_file_path_on_dvd = os.path.join(dvd_mount_point, OVF_FILE_NAME) ovf_file_path = os.path.join(conf.get_lib_dir(), OVF_FILE_NAME) try: self.osutil.mount_dvd() except OSUtilError as e: raise ProtocolError("[CopyOvfEnv] Error mounting dvd: " "{0}".format(ustr(e))) try: ovfxml = fileutil.read_file(ovf_file_path_on_dvd, remove_bom=True) ovfenv = OvfEnv(ovfxml) except (IOError, OSError) as e: raise ProtocolError("[CopyOvfEnv] Error reading file " "{0}: {1}".format(ovf_file_path_on_dvd, ustr(e))) try: ovfxml = re.sub(PASSWORD_PATTERN, PASSWORD_REPLACEMENT, ovfxml) fileutil.write_file(ovf_file_path, ovfxml) except (IOError, OSError) as e: raise ProtocolError("[CopyOvfEnv] Error writing file " "{0}: {1}".format(ovf_file_path, ustr(e))) self._cleanup_ovf_dvd() return ovfenv def _cleanup_ovf_dvd(self): try: self.osutil.umount_dvd() self.osutil.eject_dvd() except OSUtilError as e: logger.warn(ustr(e)) def get_ovf_env(self): """ Load saved ovf-env.xml """ ovf_file_path = os.path.join(conf.get_lib_dir(), OVF_FILE_NAME) if os.path.isfile(ovf_file_path): xml_text = fileutil.read_file(ovf_file_path) return OvfEnv(xml_text) else: raise ProtocolError( "ovf-env.xml is missing from {0}".format(ovf_file_path)) def _get_protocol_file_path(self): return os.path.join( conf.get_lib_dir(), PROTOCOL_FILE_NAME) def _get_wireserver_endpoint_file_path(self): return os.path.join( conf.get_lib_dir(), ENDPOINT_FILE_NAME) def get_wireserver_endpoint(self): self._lock.acquire() try: if self.endpoint: return self.endpoint file_path = self._get_wireserver_endpoint_file_path() if os.path.isfile(file_path): try: 
self.endpoint = fileutil.read_file(file_path) if self.endpoint: logger.info("WireServer endpoint {0} read from file", self.endpoint) return self.endpoint logger.error("[GetWireserverEndpoint] Unexpected empty file {0}", file_path) except (IOError, OSError) as e: logger.error("[GetWireserverEndpoint] Error reading file {0}: {1}", file_path, str(e)) else: logger.error("[GetWireserverEndpoint] Missing file {0}", file_path) self.endpoint = KNOWN_WIRESERVER_IP logger.info("Using hardcoded Wireserver endpoint {0}", self.endpoint) return self.endpoint finally: self._lock.release() def _set_wireserver_endpoint(self, endpoint): try: self.endpoint = endpoint file_path = self._get_wireserver_endpoint_file_path() fileutil.write_file(file_path, endpoint) except (IOError, OSError) as e: raise OSUtilError(ustr(e)) def _clear_wireserver_endpoint(self): """ Clean up the previously saved wireserver endpoint. """ self.endpoint = None endpoint_file_path = self._get_wireserver_endpoint_file_path() if not os.path.isfile(endpoint_file_path): return try: os.remove(endpoint_file_path) except (IOError, OSError) as e: # Ignore file-not-found errors (since the file is being removed) if e.errno == errno.ENOENT: return logger.error("Failed to clear wireserver endpoint: {0}", e) def _detect_protocol(self, init_goal_state, create_transport_certificate, save_to_history): """ Probe protocol endpoints in turn. """ self.clear_protocol() for retry in range(0, MAX_RETRY): try: endpoint = self.dhcp_handler.endpoint if endpoint is None: # pylint: disable=W0105 ''' Check if DHCP can be used to get the wire protocol endpoint ''' # pylint: enable=W0105 dhcp_available = self.osutil.is_dhcp_available() # If the user has DHCP disabled for their VM then the agent may enter a loop of failed dhcp requests. # The user can configure the agent to use the known wire server ip instead. use_dhcp = conf.get_dhcp_discovery_enabled() if dhcp_available and use_dhcp: logger.info("WireServer endpoint is not found. 
Rerun dhcp handler")
                try:
                    self.dhcp_handler.run()
                except DhcpError as e:
                    raise ProtocolError(ustr(e))
                endpoint = self.dhcp_handler.endpoint
            else:
                if not use_dhcp:
                    logger.info("_detect_protocol: DHCP usage for endpoint discovery is disabled (Protocol.EndpointDiscovery={0}). Will use known wireserver endpoint.".format(conf.get_protocol_endpoint_discovery()))
                elif not dhcp_available:
                    logger.info("_detect_protocol: DHCP not available")
                endpoint = self.get_wireserver_endpoint()

            try:
                try:
                    protocol = WireProtocol(endpoint)
                    protocol.detect(init_goal_state=init_goal_state,
                                    create_transport_certificate=create_transport_certificate,
                                    save_to_history=save_to_history)
                    self._set_wireserver_endpoint(endpoint)
                    return protocol
                except ProtocolError as e:
                    logger.info("WireServer is not responding. Reset dhcp endpoint")
                    self.dhcp_handler.endpoint = None
                    self.dhcp_handler.skip_cache = True
                    raise e
            except ProtocolError as e:
                logger.info("Protocol endpoint not found: {0}", e)

            if retry < MAX_RETRY - 1:
                logger.info("Retry detect protocol: retry={0}", retry)
                time.sleep(PROBE_INTERVAL)

        raise ProtocolNotFoundError("No protocol found.")

    def _save_protocol(self, protocol_name):
        """
        Save protocol endpoint
        """
        protocol_file_path = self._get_protocol_file_path()
        try:
            fileutil.write_file(protocol_file_path, protocol_name)
        except (IOError, OSError) as e:
            logger.error("Failed to save protocol endpoint: {0}", e)

    def clear_protocol(self):
        """
        Cleanup previous saved protocol endpoint.
        """
        self._lock.acquire()
        try:
            logger.info("Clean protocol and wireserver endpoint")
            self._clear_wireserver_endpoint()
            self._protocol = None

            protocol_file_path = self._get_protocol_file_path()
            if not os.path.isfile(protocol_file_path):
                return

            try:
                os.remove(protocol_file_path)
            except (IOError, OSError) as e:
                # Ignore file-not-found errors (since the file is being removed)
                if e.errno == errno.ENOENT:
                    return
                logger.error("Failed to clear protocol endpoint: {0}", e)
        finally:
            self._lock.release()

    def get_protocol(self, init_goal_state=True, create_transport_certificate=True, save_to_history=False):
        """
        Detect protocol by endpoint.
        :returns: protocol instance
        """
        self._lock.acquire()
        try:
            if self._protocol is not None:
                return self._protocol

            # If the protocol file contains MetadataProtocol we need to fall through to
            # _detect_protocol so that we can generate the WireServer transport certificates.
            protocol_file_path = self._get_protocol_file_path()
            if os.path.isfile(protocol_file_path) and fileutil.read_file(protocol_file_path) == WIRE_PROTOCOL_NAME:
                endpoint = self.get_wireserver_endpoint()
                self._protocol = WireProtocol(endpoint)

                # If metadataserver certificates are present we clean certificates
                # and remove MetadataServer firewall rule. It is possible
                # there was a previous intermediate upgrade before 2.2.48 but metadata artifacts
                # were not cleaned up (intermediate updated agent does not have cleanup
                # logic but we transitioned from Metadata to Wire protocol)
                if is_metadata_server_artifact_present():
                    cleanup_metadata_server_artifacts()
                return self._protocol

            logger.info("Detect protocol endpoint")
            protocol = self._detect_protocol(init_goal_state=init_goal_state,
                                             create_transport_certificate=create_transport_certificate,
                                             save_to_history=save_to_history)
            IOErrorCounter.set_protocol_endpoint(endpoint=protocol.get_endpoint())
            self._save_protocol(WIRE_PROTOCOL_NAME)
            self._protocol = protocol

            # Need to clean up MDS artifacts only after _detect_protocol so that we don't
            # delete MDS certificates if we can't reach WireServer and have to roll back
            # the update
            if is_metadata_server_artifact_present():
                cleanup_metadata_server_artifacts()

            return self._protocol
        finally:
            self._lock.release()

Azure-WALinuxAgent-a976115/azurelinuxagent/common/protocol/wire.py

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import json
import os
import random
import shutil
import time
import zipfile
from collections import defaultdict
from datetime import datetime, timedelta
from xml.sax import saxutils

from azurelinuxagent.common import conf
from azurelinuxagent.common import logger
from azurelinuxagent.common.utils import textutil
from azurelinuxagent.common.agent_supported_feature import get_agent_supported_features_list_for_crp, SupportedFeatureNames
from azurelinuxagent.common.datacontract import validate_param
from azurelinuxagent.common.event import add_event, WALAEventOperation, report_event, \
    CollectOrReportEventDebugInfo, add_periodic
from azurelinuxagent.common.exception import ProtocolNotFoundError, \
    ResourceGoneError, ExtensionDownloadError, InvalidContainerError, ProtocolError, HttpError, ExtensionErrorCodes
from azurelinuxagent.common.future import httpclient, bytebuffer, ustr, UTC
from azurelinuxagent.common.protocol.goal_state import GoalState, TRANSPORT_CERT_FILE_NAME, TRANSPORT_PRV_FILE_NAME, GoalStateProperties
from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol
from azurelinuxagent.common.protocol.restapi import DataContract, ProvisionStatus, VMInfo, VMStatus
from azurelinuxagent.common.telemetryevent import GuestAgentExtensionEventsSchema
from azurelinuxagent.common.utils import fileutil, restutil
from azurelinuxagent.common.utils.cryptutil import CryptUtil
from azurelinuxagent.common.utils.restutil import TELEMETRY_THROTTLE_DELAY_IN_SECONDS, \
    TELEMETRY_FLUSH_THROTTLE_DELAY_IN_SECONDS, TELEMETRY_DATA
from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, \
    findtext, gettext, remove_bom, get_bytes_from_pem, parse_json, redact_sas_token
from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION
from azurelinuxagent.ga.signature_validation_util import validate_signature, SignatureValidationError

VERSION_INFO_URI = "http://{0}/?comp=versions"
HEALTH_REPORT_URI = "http://{0}/machine?comp=health"
ROLE_PROP_URI = "http://{0}/machine?comp=roleProperties"
TELEMETRY_URI = "http://{0}/machine?comp={1}"

PROTOCOL_VERSION = "2012-11-30"
ENDPOINT_FINE_NAME = "WireServer"

SHORT_WAITING_INTERVAL = 1  # 1 second

MAX_EVENT_BUFFER_SIZE = 2 ** 16 - 2 ** 10

_DOWNLOAD_TIMEOUT = timedelta(minutes=5)


class UploadError(HttpError):
    pass


class WireProtocol(DataContract):

    def __init__(self, endpoint):
        if endpoint is None:
            raise ProtocolError("WireProtocol endpoint is None")
        self.client = WireClient(endpoint)

    def detect(self, init_goal_state=True, create_transport_certificate=True, save_to_history=False):
        self.client.check_wire_protocol_version()

        if create_transport_certificate:
            trans_prv_file = os.path.join(conf.get_lib_dir(), TRANSPORT_PRV_FILE_NAME)
            trans_cert_file = os.path.join(conf.get_lib_dir(), TRANSPORT_CERT_FILE_NAME)
            cryptutil = CryptUtil(conf.get_openssl_cmd())
            cryptutil.gen_transport_cert(trans_prv_file, trans_cert_file)

        # Initialize the goal state, including all the inner properties
        if init_goal_state:
            logger.info('Initializing goal state during protocol detection')
            #
            # TODO: Currently protocol detection retrieves the entire goal state. This is not needed; in particular, retrieving the Extensions goal state
            #       is not needed. However, the goal state is cached in self.client._goal_state and other components, including the Extension Handler,
            #       depend on this cached value. This has been a long-standing issue that causes multiple problems. Before removing the cached goal state,
            #       though, a careful review of these dependencies is needed. One of the problems of fetching the full goal state is that issues while
            #       retrieving it can block protocol detection and make the Agent go into a retry loop that can last 1 full hour.
            #
            self.client.reset_goal_state(save_to_history=save_to_history)

    def update_host_plugin_from_goal_state(self):
        self.client.update_host_plugin_from_goal_state()

    def get_endpoint(self):
        return self.client.get_endpoint()

    def get_vminfo(self):
        goal_state = self.client.get_goal_state()
        hosting_env = self.client.get_hosting_env()

        vminfo = VMInfo()
        vminfo.subscriptionId = None
        vminfo.vmName = hosting_env.vm_name
        vminfo.tenantName = hosting_env.deployment_name
        vminfo.roleName = hosting_env.role_name
        vminfo.roleInstanceName = goal_state.role_instance_id
        return vminfo

    def get_certs(self):
        return self.client.get_certs()

    def get_goal_state(self):
        return self.client.get_goal_state()

    def report_provision_status(self, provision_status):
        validate_param("provision_status", provision_status, ProvisionStatus)

        if provision_status.status is not None:
            self.client.report_health(provision_status.status,
                                      provision_status.subStatus,
                                      provision_status.description)
        if provision_status.properties.certificateThumbprint is not None:
            thumbprint = provision_status.properties.certificateThumbprint
            self.client.report_role_prop(thumbprint)

    def report_vm_status(self, vm_status):
        validate_param("vm_status", vm_status, VMStatus)
        self.client.status_blob.set_vm_status(vm_status)
        self.client.upload_status_blob()

    def report_event(self, events_iterator, flush=False):
        return self.client.report_event(events_iterator, flush)

    def upload_logs(self, logs):
        self.client.upload_logs(logs)

    def get_status_blob_data(self):
        return self.client.status_blob.data


def _build_role_properties(container_id, role_instance_id, thumbprint):
    xml = (u"<?xml version=\"1.0\" encoding=\"utf-8\"?>"
           u"<RoleProperties>"
           u"<Container>"
           u"<ContainerId>{0}</ContainerId>"
           u"<RoleInstances>"
           u"<RoleInstance>"
           u"<Id>{1}</Id>"
           u"<Properties>"
           u"<Property name=\"CertificateThumbprint\" value=\"{2}\" />"
           u"</Properties>"
           u"</RoleInstance>"
           u"</RoleInstances>"
           u"</Container>"
           u"</RoleProperties>").format(container_id, role_instance_id, thumbprint)
    return xml


def _build_health_report(incarnation, container_id, role_instance_id, status, substatus, description):
    # The max description that can be sent to WireServer is 4096 bytes.
    # Exceeding this max can result in a failure to report health.
    # To keep this simple, we will keep a 10% buffer and trim before
    # encoding the description.
    if description:
        max_chars_before_encoding = 3686
        len_before_trim = len(description)
        description = description[:max_chars_before_encoding]
        trimmed_char_count = len_before_trim - len(description)
        if trimmed_char_count > 0:
            logger.info(
                'Trimmed health report description by {0} characters'.format(
                    trimmed_char_count
                )
            )

        # Escape '&', '<' and '>'
        description = saxutils.escape(ustr(description))

    detail = u''
    if substatus is not None:
        substatus = saxutils.escape(ustr(substatus))
        detail = (u"<Details>"
                  u"<SubStatus>{0}</SubStatus>"
                  u"<Description>{1}</Description>"
                  u"</Details>").format(substatus, description)
    xml = (u"<?xml version=\"1.0\" encoding=\"utf-8\"?>"
           u"<Health xmlns=\"http://schemas.microsoft.com/windowsazure\">"
           u"<GoalStateIncarnation>{0}</GoalStateIncarnation>"
           u"<Container>"
           u"<ContainerId>{1}</ContainerId>"
           u"<RoleInstanceList>"
           u"<Role>"
           u"<InstanceId>{2}</InstanceId>"
           u"<Health>"
           u"<State>{3}</State>"
           u"{4}"
           u"</Health>"
           u"</Role>"
           u"</RoleInstanceList>"
           u"</Container>"
           u"</Health>").format(incarnation, container_id, role_instance_id, status, detail)
    return xml


def ga_status_to_guest_info(ga_status):
    """
    Convert VMStatus object to status blob format
    """
    v1_ga_guest_info = {
        "computerName": ga_status.hostname,
        "osName": ga_status.osname,
        "osVersion": ga_status.osversion,
        "version": ga_status.version,
    }
    return v1_ga_guest_info


def __get_formatted_msg_for_status_reporting(msg, lang="en-US"):
    return {
        'lang': lang,
        'message': redact_sas_token(msg)
    }


def _get_utc_timestamp_for_status_reporting(time_format="%Y-%m-%dT%H:%M:%SZ", timestamp=None):
    timestamp = time.gmtime() if timestamp is None else timestamp
    return time.strftime(time_format, timestamp)


def ga_status_to_v1(ga_status):
    v1_ga_status = {
        "version": ga_status.version,
        "status": ga_status.status,
        "formattedMessage": __get_formatted_msg_for_status_reporting(ga_status.message)
    }

    if ga_status.update_status is not None:
        v1_ga_status["updateStatus"] = get_ga_update_status_to_v1(ga_status.update_status)

    return v1_ga_status


def get_ga_update_status_to_v1(update_status):
    v1_ga_update_status = {
        "expectedVersion": update_status.expected_version,
        "status": update_status.status,
        "code": update_status.code,
        "formattedMessage": __get_formatted_msg_for_status_reporting(update_status.message)
    }
    return v1_ga_update_status


def ext_substatus_to_v1(sub_status_list):
    status_list = []
    for substatus in sub_status_list:
        status = {
            "name": substatus.name,
            "status": substatus.status,
            "code": substatus.code,
            "formattedMessage": __get_formatted_msg_for_status_reporting(substatus.message)
        }
        status_list.append(status)
    return status_list


def ext_status_to_v1(ext_status):
    if ext_status is None:
        return None
    timestamp = _get_utc_timestamp_for_status_reporting()
    v1_sub_status = ext_substatus_to_v1(ext_status.substatusList)
    v1_ext_status = {
        "status": {
            "name": ext_status.name,
            "configurationAppliedTime": ext_status.configurationAppliedTime,
            "operation": ext_status.operation,
            "status": ext_status.status,
            "code": ext_status.code,
            "formattedMessage": __get_formatted_msg_for_status_reporting(ext_status.message)
        },
        "version": 1.0,
        "timestampUTC": timestamp
    }
    if len(v1_sub_status) != 0:
        v1_ext_status['status']['substatus'] = v1_sub_status
    return v1_ext_status


def ext_handler_status_to_v1(ext_handler_status):
    v1_handler_status = {
        'handlerVersion': ext_handler_status.version,
        'handlerName': ext_handler_status.name,
        'status': ext_handler_status.status,
        'code': ext_handler_status.code,
        'useExactVersion': True
    }
    if ext_handler_status.message is not None:
        v1_handler_status["formattedMessage"] = __get_formatted_msg_for_status_reporting(ext_handler_status.message)

    v1_ext_status = ext_status_to_v1(ext_handler_status.extension_status)
    if ext_handler_status.extension_status is not None and v1_ext_status is not None:
        v1_handler_status["runtimeSettingsStatus"] = {
            'settingsStatus': v1_ext_status,
            'sequenceNumber': ext_handler_status.extension_status.sequenceNumber
        }

        # Add extension name if Handler supports MultiConfig
        if ext_handler_status.supports_multi_config:
            v1_handler_status["runtimeSettingsStatus"]["extensionName"] = ext_handler_status.extension_status.name

    return v1_handler_status


def vm_artifacts_aggregate_status_to_v1(vm_artifacts_aggregate_status):
    gs_aggregate_status = vm_artifacts_aggregate_status.goal_state_aggregate_status
    if gs_aggregate_status is None:
        return None

    v1_goal_state_aggregate_status = {
        "formattedMessage": __get_formatted_msg_for_status_reporting(gs_aggregate_status.message),
        "timestampUTC": _get_utc_timestamp_for_status_reporting(timestamp=gs_aggregate_status.processed_time),
        "inSvdSeqNo": gs_aggregate_status.in_svd_seq_no,
        "status": gs_aggregate_status.status,
        "code": gs_aggregate_status.code
    }

    v1_artifact_aggregate_status = {
        "goalStateAggregateStatus": v1_goal_state_aggregate_status
    }
    return v1_artifact_aggregate_status


def vm_status_to_v1(vm_status):
    timestamp = _get_utc_timestamp_for_status_reporting()
    v1_ga_guest_info = ga_status_to_guest_info(vm_status.vmAgent)
    v1_ga_status = ga_status_to_v1(vm_status.vmAgent)
    v1_vm_artifact_aggregate_status = vm_artifacts_aggregate_status_to_v1(
        vm_status.vmAgent.vm_artifacts_aggregate_status)
    v1_handler_status_list = []
    for handler_status in vm_status.vmAgent.extensionHandlers:
        v1_handler_status_list.append(ext_handler_status_to_v1(handler_status))

    v1_agg_status = {
        'guestAgentStatus': v1_ga_status,
        'handlerAggregateStatus': v1_handler_status_list
    }
    if v1_vm_artifact_aggregate_status is not None:
        v1_agg_status['vmArtifactsAggregateStatus'] = v1_vm_artifact_aggregate_status

    v1_vm_status = {
        'version': '1.1',
        'timestampUTC': timestamp,
        'aggregateStatus': v1_agg_status,
        'guestOSInfo': v1_ga_guest_info
    }

    supported_features = []
    for _, feature in get_agent_supported_features_list_for_crp().items():
        supported_features.append(
            {
                "Key": feature.name,
                "Value": feature.version
            }
        )
    if vm_status.vmAgent.supports_fast_track:
        supported_features.append(
            {
                "Key": SupportedFeatureNames.FastTrack,
                "Value": "1.0"  # This is a dummy version; CRP ignores it
            }
        )
    if supported_features:
        v1_vm_status["supportedFeatures"] = supported_features

    return v1_vm_status


class StatusBlob(object):
    def __init__(self, client):
        self.vm_status = None
        self.client = client
        self.type = None
        self.data = None

    def set_vm_status(self, vm_status):
        validate_param("vmAgent", vm_status, VMStatus)
        self.vm_status = vm_status

    def to_json(self):
        report = vm_status_to_v1(self.vm_status)
        return json.dumps(report)

    __storage_version__ = "2014-02-14"

    def prepare(self, blob_type):
        logger.verbose("Prepare status blob")
        self.data = self.to_json()
        self.type = blob_type

    def upload(self, url):
        try:
            if not self.type in ["BlockBlob", "PageBlob"]:
                raise ProtocolError("Illegal blob type: {0}".format(self.type))

            if self.type == "BlockBlob":
                self.put_block_blob(url, self.data)
            else:
                self.put_page_blob(url, self.data)
            return True
        except Exception as e:
            logger.verbose("Initial status upload failed: {0}", e)
        return False

    def get_block_blob_headers(self, blob_size):
        return {
            "Content-Length": ustr(blob_size),
            "x-ms-blob-type": "BlockBlob",
            "x-ms-date": _get_utc_timestamp_for_status_reporting(),
            "x-ms-version": self.__class__.__storage_version__
        }

    def put_block_blob(self, url, data):
        logger.verbose("Put block blob")
        headers = self.get_block_blob_headers(len(data))
        resp = self.client.call_storage_service(restutil.http_put, url, data, headers)
        if resp.status != httpclient.CREATED:
            raise UploadError(
                "Failed to upload block blob: {0}".format(resp.status))

    def get_page_blob_create_headers(self, blob_size):
        return {
            "Content-Length": "0",
            "x-ms-blob-content-length": ustr(blob_size),
            "x-ms-blob-type": "PageBlob",
            "x-ms-date": _get_utc_timestamp_for_status_reporting(),
            "x-ms-version": self.__class__.__storage_version__
        }

    def get_page_blob_page_headers(self, start, end):
        return {
            "Content-Length": ustr(end - start),
            "x-ms-date": _get_utc_timestamp_for_status_reporting(),
            "x-ms-range": "bytes={0}-{1}".format(start, end - 1),
            "x-ms-page-write": "update",
            "x-ms-version": self.__class__.__storage_version__
        }

    def put_page_blob(self, url, data):
        logger.verbose("Put page blob")

        # Convert string into bytes and align to 512 bytes
        data = bytearray(data, encoding='utf-8')
        page_blob_size = int((len(data) + 511) / 512) * 512

        headers = self.get_page_blob_create_headers(page_blob_size)
        resp = self.client.call_storage_service(restutil.http_put, url, "", headers)
        if resp.status != httpclient.CREATED:
            raise UploadError(
                "Failed to clean up page blob: {0}".format(resp.status))

        if url.count("?") <= 0:
            url = "{0}?comp=page".format(url)
        else:
            url = "{0}&comp=page".format(url)

        logger.verbose("Upload page blob")
        page_max = 4 * 1024 * 1024  # Max page size: 4MB
        start = 0
        end = 0
        while end < len(data):
            end = min(len(data), start + page_max)
            content_size = end - start
            # Align to 512 bytes
            page_end = int((end + 511) / 512) * 512
            buf_size = page_end - start
            buf = bytearray(buf_size)
            buf[0: content_size] = data[start: end]
            headers = self.get_page_blob_page_headers(start, page_end)
            resp = self.client.call_storage_service(
                restutil.http_put,
                url,
                bytebuffer(buf),
                headers)
            if resp is None or resp.status != httpclient.CREATED:
                raise UploadError(
                    "Failed to upload page blob: {0}".format(resp.status))
            start = end


def event_param_to_v1(param):
    param_format = ustr('<Param Name="{0}" Value={1} T="{2}" />')
    param_type = type(param.value)
    attr_type = ""
    if param_type is int:
        attr_type = 'mt:uint64'
    elif param_type is str:
        attr_type = 'mt:wstr'
    elif ustr(param_type).count("'unicode'") > 0:
        attr_type = 'mt:wstr'
    elif param_type is bool:
        attr_type = 'mt:bool'
    elif param_type is float:
        attr_type = 'mt:float64'
    return param_format.format(param.name,
                               saxutils.quoteattr(ustr(param.value)),
                               attr_type)


def event_to_v1_encoded(event, encoding='utf-8'):
    params = ""
    for param in event.parameters:
        params += event_param_to_v1(param)
    event_str = ustr('<Event id="{0}"><![CDATA[{1}]]></Event>').format(event.eventId, params)
    return event_str.encode(encoding)


class WireClient(object):

    def __init__(self, endpoint):
        logger.info("Wire server endpoint:{0}", endpoint)
        self._endpoint = endpoint
        self._goal_state = None
        self._host_plugin = None
        self.status_blob = StatusBlob(self)

    def get_endpoint(self):
        return self._endpoint

    def call_wireserver(self, http_req, *args, **kwargs):
        try:
            # Never use the HTTP proxy for wireserver
            kwargs['use_proxy'] = False
            resp = http_req(*args, **kwargs)

            if restutil.request_failed(resp):
                msg = "[Wireserver Failed] URI {0} ".format(args[0])
                if resp is not None:
                    msg += " [HTTP Failed] Status Code {0}".format(resp.status)
                raise ProtocolError(msg)

        # If the GoalState is stale, pass along the exception to the caller
        except ResourceGoneError:
            raise

        except Exception as e:
            raise ProtocolError("[Wireserver Exception] {0}".format(ustr(e)))

        return resp

    def decode_config(self, data):
        if data is None:
            return None
        data = remove_bom(data)
        xml_text = ustr(data, encoding='utf-8')
        return xml_text

    def fetch_config(self, uri, headers):
        resp = self.call_wireserver(restutil.http_get, uri, headers=headers)
        return self.decode_config(resp.read())

    @staticmethod
    def call_storage_service(http_req, *args, **kwargs):
        # Default to use the configured HTTP proxy
        if not 'use_proxy' in kwargs or kwargs['use_proxy'] is None:
            kwargs['use_proxy'] = True

        return http_req(*args, **kwargs)

    def fetch_artifacts_profile_blob(self, uri):
        return self._fetch_content("artifacts profile blob", [uri], use_verify_header=False)[1]  # _fetch_content returns a (uri, content) tuple

    def fetch_manifest(self, manifest_type, uris, use_verify_header):
        uri, content = self._fetch_content("{0} manifest".format(manifest_type), uris, use_verify_header=use_verify_header)
        self.get_host_plugin().update_manifest_uri(uri)
        return content

    def _fetch_content(self, download_type, uris, use_verify_header):
        """
        Walks the given list of 'uris' issuing HTTP GET requests; returns a tuple with the URI and the content of the first successful request.

        The 'download_type' is added to any log messages produced by this method; it should describe the type of content of the given URIs
        (e.g. "manifest", "extension package", etc).
        """
        host_ga_plugin = self.get_host_plugin()

        direct_download = lambda uri: self.fetch(uri)[0]

        def hgap_download(uri):
            request_uri, request_headers = host_ga_plugin.get_artifact_request(uri, use_verify_header=use_verify_header)
            response, _ = self.fetch(request_uri, request_headers, use_proxy=False, retry_codes=restutil.HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES)
            return response

        return self._download_with_fallback_channel(download_type, uris, direct_download=direct_download, hgap_download=hgap_download)

    def download_zip_package(self, package_name, uris, target_file, target_directory, use_verify_header, signature, enforce_signature):
        """
        Downloads the ZIP package specified in 'uris' (which is a list of alternate locations for the ZIP), saving it to 'target_file' and
        then expanding its contents to 'target_directory'. Deletes the target file after it has been expanded.

        The 'package_name' parameter is used only for logging and telemetry. It should be the full name of the ZIP package, formatted as
        "Name-Version" (e.g., "Microsoft.Azure.Extensions.CustomScript-2.1.13" for an extension package, or "WALinuxAgent-9.9.9.9" for
        the agent package). The name and version will be extracted from this string for telemetry purposes only.

        The 'use_verify_header' parameter indicates whether the verify header should be added when using the extensionArtifact API of the HostGAPlugin.

        The 'signature' parameter should be a base64-encoded signature string. If signature is not an empty string, package signature will be
        validated immediately after downloading the package but before expanding it.

        Currently, the 'enforce_signature' flag only affects logging and telemetry. If set to False, a message is appended to any validation
        failure indicating that the error can be safely ignored.
        TODO: Update logic so that 'enforce_signature' also controls whether validation failures raise an exception.
        """
        host_ga_plugin = self.get_host_plugin()

        direct_download = lambda uri: self.stream(uri, target_file, headers=None, use_proxy=True)

        def hgap_download(uri):
            request_uri, request_headers = host_ga_plugin.get_artifact_request(uri, use_verify_header=use_verify_header, artifact_manifest_url=host_ga_plugin.manifest_uri)
            return self.stream(request_uri, target_file, headers=request_headers, use_proxy=False)

        def on_downloaded():
            # If 'signature' parameter is not an empty string, validate the zip package signature immediately after download.
            # Signature validation errors are caught and stored, allowing download to proceed. After zip package extraction,
            # the error is re-raised to surface the failure, so the caller has knowledge of the failure and can handle appropriately.
            # In future releases, once sufficient telemetry is collected and we gain confidence in the validation process,
            # extraction will be blocked if signature validation fails, and the zip will be removed.
            #
            # TODO: Block packages failing signature validation when 'enforce_signature' is True
            validation_error = None
            if signature != "":
                try:
                    failure_log_level = logger.LogLevel.ERROR if enforce_signature else logger.LogLevel.WARNING
                    validate_signature(target_file, signature, package_full_name=package_name, failure_log_level=failure_log_level)
                except SignatureValidationError as ex:
                    # validate_signature() only raises SignatureValidationError, and already sends logs/telemetry for the error.
                    # If signature is not being enforced, catch the error and re-raise after expanding the zip.
                    # TODO: if signature is being enforced, raise error and clean up zip file
                    validation_error = ex

            WireClient._try_expand_zip_package(package_name, target_file, target_directory)

            # Surface any validation errors after extraction so the caller can decide how to handle.
            if validation_error is not None:
                raise validation_error

        # If on_downloaded() raises a SignatureValidationError, _download_with_fallback_channel will not attempt retries with other URIs,
        # the error will propagate immediately.
        self._download_with_fallback_channel(package_name, uris, direct_download=direct_download, hgap_download=hgap_download, on_downloaded=on_downloaded)

    def _download_with_fallback_channel(self, download_type, uris, direct_download, hgap_download, on_downloaded=None):
        """
        Walks the given list of 'uris' issuing HTTP GET requests, attempting to download the content of each URI. The download is done using both the default and
        the fallback channels, until one of them succeeds. The 'direct_download' and 'hgap_download' functions define the logic to do direct calls to the URI or
        to use the HostGAPlugin as a proxy for the download. Initially the default channel is the direct download and the fallback channel is the HostGAPlugin,
        but the default can change depending on the success/failure of each channel (see _download_using_appropriate_channel() for the logic to do this).

        The 'download_type' is added to any log messages produced by this method; it should describe the type of content of the given URIs
        (e.g. "manifest", "Microsoft.Azure.Extensions.CustomScript-2.1.13", "WALinuxAgent-9.9.9.9", etc).

        When the download is successful, _download_with_fallback_channel invokes the 'on_downloaded' function, which can be used to process the results of the
        download. This function should return True on success, and False on failure (it should not raise any exceptions). If the return value is False, the
        download is considered a failure and the next URI is tried.

        When the download succeeds, this method returns a (uri, response) tuple where the first item is the URI of the successful download and the second item
        is the response returned by the successful channel (i.e. one of direct_download and hgap_download).

        This method enforces a timeout (_DOWNLOAD_TIMEOUT) on the download and raises an exception if the limit is exceeded.
        """
        logger.info("Downloading {0}", download_type)
        start_time = datetime.now(UTC)

        uris_shuffled = uris
        random.shuffle(uris_shuffled)
        most_recent_error = "None"

        for index, uri in enumerate(uris_shuffled):
            elapsed = datetime.now(UTC) - start_time
            if elapsed > _DOWNLOAD_TIMEOUT:
                message = "Timeout downloading {0}. Elapsed: {1} URIs tried: {2}/{3}. Last error: {4}".format(download_type, elapsed, index, len(uris), ustr(most_recent_error))
                raise ExtensionDownloadError(message, code=ExtensionErrorCodes.PluginManifestDownloadError)

            try:
                # Disable W0640: OK to use uri in a lambda within the loop's body
                response = self._download_using_appropriate_channel(lambda: direct_download(uri), lambda: hgap_download(uri))  # pylint: disable=W0640

                if on_downloaded is not None:
                    on_downloaded()

                return uri, response
            except SignatureValidationError:
                # If download fails due to package signature validation, do not retry.
                raise
            except Exception as exception:
                most_recent_error = exception

        raise ExtensionDownloadError("Failed to download {0} from all URIs. Last error: {1}".format(download_type, ustr(most_recent_error)), code=ExtensionErrorCodes.PluginManifestDownloadError)

    @staticmethod
    def _try_expand_zip_package(package_type, target_file, target_directory):
        logger.info("Unzipping {0}: {1}", package_type, target_file)
        try:
            zipfile.ZipFile(target_file).extractall(target_directory)
        except Exception as exception:
            logger.error("Error while unzipping {0}: {1}", package_type, ustr(exception))
            if os.path.exists(target_directory):
                try:
                    shutil.rmtree(target_directory)
                except Exception as rmtree_exception:
                    logger.warn("Cannot delete {0}: {1}", target_directory, ustr(rmtree_exception))
            raise
        finally:
            try:
                os.remove(target_file)
            except Exception as exception:
                logger.warn("Cannot delete {0}: {1}", target_file, ustr(exception))

    def stream(self, uri, destination, headers=None, use_proxy=None):
        """
        Downloads the content of the given 'uri' and saves it to the 'destination' file.
        """
        try:
            logger.verbose("Fetch [{0}] with headers [{1}] to file [{2}]", uri, headers, destination)

            response = self._fetch_response(uri, headers, use_proxy)
            if response is not None and not restutil.request_failed(response):
                chunk_size = 1024 * 1024  # 1MB buffer
                with open(destination, 'wb', chunk_size) as destination_fh:
                    complete = False
                    while not complete:
                        chunk = response.read(chunk_size)
                        destination_fh.write(chunk)
                        complete = len(chunk) < chunk_size
            return ""
        except:
            if os.path.exists(destination):  # delete the destination file, in case we did a partial download
                try:
                    os.remove(destination)
                except Exception as exception:
                    logger.warn("Can't delete {0}: {1}", destination, ustr(exception))
            raise

    def fetch(self, uri, headers=None, use_proxy=None, decode=True, retry_codes=None, ok_codes=None):
        """
        Returns a tuple with the content and headers of the response. The headers are a list of (name, value) tuples.
        """
        logger.verbose("Fetch [{0}] with headers [{1}]", uri, headers)
        content = None
        response_headers = None
        response = self._fetch_response(uri, headers, use_proxy, retry_codes=retry_codes, ok_codes=ok_codes)
        if response is not None and not restutil.request_failed(response, ok_codes=ok_codes):
            response_content = response.read()
            content = self.decode_config(response_content) if decode else response_content
            response_headers = response.getheaders()
        return content, response_headers

    def _fetch_response(self, uri, headers=None, use_proxy=None, retry_codes=None, ok_codes=None):
        resp = None
        try:
            resp = self.call_storage_service(
                restutil.http_get,
                uri,
                headers=headers,
                use_proxy=use_proxy,
                retry_codes=retry_codes)

            host_plugin = self.get_host_plugin()

            if restutil.request_failed(resp, ok_codes=ok_codes):
                error_response = restutil.read_response_error(resp)
                msg = "Fetch failed from [{0}]: {1}".format(uri, error_response)
                logger.warn(msg)

                if host_plugin is not None:
                    host_plugin.report_fetch_health(uri,
                                                    is_healthy=not restutil.request_failed_at_hostplugin(resp),
                                                    source='WireClient',
                                                    response=error_response)
                raise ProtocolError(msg)
            else:
                if host_plugin is not None:
                    host_plugin.report_fetch_health(uri, source='WireClient')

        except (HttpError, ProtocolError, IOError) as error:
            msg = "Fetch failed: {0}".format(error)
            logger.warn(msg)
            report_event(op=WALAEventOperation.HttpGet, is_success=False, message=msg, log_event=False)
            raise

        return resp

    def update_host_plugin_from_goal_state(self):
        """
        Fetches a new goal state and updates the Container ID and Role Config Name of the host plugin client
        """
        if self._host_plugin is not None:
            GoalState.update_host_plugin_headers(self)

    def update_host_plugin(self, container_id, role_config_name):
        if self._host_plugin is not None:
            self._host_plugin.update_container_id(container_id)
            self._host_plugin.update_role_config_name(role_config_name)

    def update_goal_state(self, force_update=False, silent=False, save_to_history=False):
        """
        Updates the goal state
        if the incarnation or etag changed
        """
        try:
            if self._goal_state is None:
                self._goal_state = GoalState(self, silent=silent, save_to_history=save_to_history)
            else:
                self._goal_state.update(force_update=force_update, silent=silent)

        except ProtocolError:
            raise
        except Exception as exception:
            raise ProtocolError("Error fetching goal state: {0}".format(ustr(exception)))

    def reset_goal_state(self, goal_state_properties=GoalStateProperties.All, silent=False, save_to_history=False):
        """
        Resets the goal state
        """
        try:
            if not silent:
                logger.info("Forcing an update of the goal state.")

            self._goal_state = GoalState(self, goal_state_properties=goal_state_properties, silent=silent, save_to_history=save_to_history)

        except ProtocolError:
            raise
        except Exception as exception:
            raise ProtocolError("Error fetching goal state: {0}".format(ustr(exception)))

    def get_goal_state(self):
        if self._goal_state is None:
            raise ProtocolError("Trying to fetch goal state before initialization!")
        return self._goal_state

    def get_hosting_env(self):
        if self._goal_state is None:
            raise ProtocolError("Trying to fetch Hosting Environment before initialization!")
        return self._goal_state.hosting_env

    def get_shared_conf(self):
        if self._goal_state is None:
            raise ProtocolError("Trying to fetch Shared Conf before initialization!")
        return self._goal_state.shared_conf

    def get_certs(self):
        if self._goal_state is None:
            raise ProtocolError("Trying to fetch Certificates before initialization!")
        return self._goal_state.certs

    def get_remote_access(self):
        if self._goal_state is None:
            raise ProtocolError("Trying to fetch Remote Access before initialization!")
        return self._goal_state.remote_access

    def check_wire_protocol_version(self):
        uri = VERSION_INFO_URI.format(self.get_endpoint())
        version_info_xml = self.fetch_config(uri, None)
        version_info = VersionInfo(version_info_xml)

        preferred = version_info.get_preferred()
        if PROTOCOL_VERSION == preferred:
            logger.info("Wire protocol version:{0}", PROTOCOL_VERSION)
        elif PROTOCOL_VERSION in version_info.get_supported():
            logger.info("Wire protocol version:{0}", PROTOCOL_VERSION)
            logger.info("Server preferred version:{0}", preferred)
        else:
            error = ("Agent supported wire protocol version: {0} was not "
                     "advised by Fabric.").format(PROTOCOL_VERSION)
            raise ProtocolNotFoundError(error)

    def _call_hostplugin_with_container_check(self, host_func):
        """
        Calls host_func on host channel and accounts for stale resource (ResourceGoneError or InvalidContainerError).
        If stale, it refreshes the goal state and retries host_func.
        """
        try:
            return host_func()
        except (ResourceGoneError, InvalidContainerError) as error:
            host_plugin = self.get_host_plugin()

            old_container_id, old_role_config_name = host_plugin.container_id, host_plugin.role_config_name
            msg = "[PERIODIC] Request failed with the current host plugin configuration. " \
                  "ContainerId: {0}, role config file: {1}. Fetching new goal state and retrying the call. " \
                  "Error: {2}".format(old_container_id, old_role_config_name, ustr(error))
            logger.periodic_info(logger.EVERY_SIX_HOURS, msg)

            self.update_host_plugin_from_goal_state()

            new_container_id, new_role_config_name = host_plugin.container_id, host_plugin.role_config_name
            msg = "[PERIODIC] Host plugin reconfigured with new parameters. " \
                  "ContainerId: {0}, role config file: {1}.".format(new_container_id, new_role_config_name)
            logger.periodic_info(logger.EVERY_SIX_HOURS, msg)

            try:
                ret = host_func()
                msg = "[PERIODIC] Request succeeded using the host plugin channel after goal state refresh. " \
                      "ContainerId changed from {0} to {1}, " \
                      "role config file changed from {2} to {3}.".format(old_container_id, new_container_id,
                                                                         old_role_config_name, new_role_config_name)
                add_periodic(delta=logger.EVERY_SIX_HOURS,
                             name=AGENT_NAME,
                             version=CURRENT_VERSION,
                             op=WALAEventOperation.HostPlugin,
                             is_success=True,
                             message=msg,
                             log_event=True)
                return ret
            except (ResourceGoneError, InvalidContainerError) as host_error:
                msg = "[PERIODIC] Request failed using the host plugin channel after goal state refresh. " \
                      "ContainerId changed from {0} to {1}, role config file changed from {2} to {3}. " \
                      "Exception type: {4}.".format(old_container_id, new_container_id, old_role_config_name,
                                                    new_role_config_name, type(host_error).__name__)
                add_periodic(delta=logger.EVERY_SIX_HOURS,
                             name=AGENT_NAME,
                             version=CURRENT_VERSION,
                             op=WALAEventOperation.HostPlugin,
                             is_success=False,
                             message=msg,
                             log_event=True)
                raise

    def _download_using_appropriate_channel(self, direct_download, hgap_download):
        """
        Does a download using both the default and fallback channels. By default, the primary channel is direct, host channel is the fallback.
        We call the primary channel first and return on success. If primary fails, we try the fallback. If fallback fails,
        we return and *don't* switch the default channel. If fallback succeeds, we change the default channel.
        """
        hgap_download_function_with_retry = lambda: self._call_hostplugin_with_container_check(hgap_download)

        if HostPluginProtocol.is_default_channel:
            primary_channel, secondary_channel = hgap_download_function_with_retry, direct_download
        else:
            primary_channel, secondary_channel = direct_download, hgap_download_function_with_retry

        try:
            return primary_channel()
        except Exception as exception:
            primary_channel_error = exception

        try:
            return_value = secondary_channel()

            # Since the secondary channel succeeded, flip the default channel
            HostPluginProtocol.is_default_channel = not HostPluginProtocol.is_default_channel
            message = "Default channel changed to {0} channel.".format("HostGAPlugin" if HostPluginProtocol.is_default_channel else "Direct")
            logger.info(message)
            add_event(AGENT_NAME, op=WALAEventOperation.DefaultChannelChange, version=CURRENT_VERSION, is_success=True, message=message, log_event=False)

            return return_value
        except Exception as exception:
            raise HttpError("Download failed both on the primary and fallback channels. Primary: [{0}] Fallback: [{1}]".format(ustr(primary_channel_error), ustr(exception)))

    def upload_status_blob(self):
        extensions_goal_state = self.get_goal_state().extensions_goal_state

        if extensions_goal_state.status_upload_blob is None:
            # the status upload blob is in ExtensionsConfig so force a full goal state refresh
            self.reset_goal_state(silent=True, save_to_history=True)
            extensions_goal_state = self.get_goal_state().extensions_goal_state

            if extensions_goal_state.status_upload_blob is None:
                raise ProtocolNotFoundError("Status upload uri is missing")

            logger.info("Refreshed the goal state to get the status upload blob. New Goal State ID: {0}", extensions_goal_state.id)

        blob_type = extensions_goal_state.status_upload_blob_type

        try:
            self.status_blob.prepare(blob_type)
        except Exception as e:
            raise ProtocolError("Exception creating status blob: {0}".format(ustr(e)))

        # Swap the order of use for the HostPlugin vs. the "direct" route.
# Prefer the use of HostPlugin. If HostPlugin fails fall back to the # direct route. # # The code previously preferred the "direct" route always, and only fell back # to the HostPlugin *if* there was an error. We would like to move to # the HostPlugin for all traffic, but this is a big change. We would like # to see how this behaves at scale, and have a fallback should things go # wrong. This is why we try HostPlugin then direct. try: host = self.get_host_plugin() host.put_vm_status(self.status_blob, extensions_goal_state.status_upload_blob, extensions_goal_state.status_upload_blob_type) return except ResourceGoneError: # refresh the host plugin client and try again on the next iteration of the main loop self.update_host_plugin_from_goal_state() return except Exception as e: # for all other errors, fall back to direct msg = "Falling back to direct upload: {0}".format(ustr(e)) self.report_status_event(msg, is_success=True) try: if self.status_blob.upload(extensions_goal_state.status_upload_blob): return except Exception as e: msg = "Exception uploading status blob: {0}".format(ustr(e)) self.report_status_event(msg, is_success=False) raise ProtocolError("Failed to upload status blob via either channel") def report_role_prop(self, thumbprint): goal_state = self.get_goal_state() role_prop = _build_role_properties(goal_state.container_id, goal_state.role_instance_id, thumbprint) role_prop = role_prop.encode("utf-8") role_prop_uri = ROLE_PROP_URI.format(self.get_endpoint()) headers = self.get_header_for_xml_content() try: resp = self.call_wireserver(restutil.http_post, role_prop_uri, role_prop, headers=headers) except HttpError as e: raise ProtocolError((u"Failed to send role properties: " u"{0}").format(e)) if resp.status != httpclient.ACCEPTED: raise ProtocolError((u"Failed to send role properties: " u",{0}: {1}").format(resp.status, resp.read())) def report_health(self, status, substatus, description): goal_state = self.get_goal_state() health_report = 
_build_health_report(goal_state.incarnation, goal_state.container_id, goal_state.role_instance_id, status, substatus, description) health_report = health_report.encode("utf-8") health_report_uri = HEALTH_REPORT_URI.format(self.get_endpoint()) headers = self.get_header_for_xml_content() try: # 30 retries with 10s sleep gives ~5min for wireserver updates; # this is retried 3 times with 15s sleep before throwing a # ProtocolError, for a total of ~15min. resp = self.call_wireserver(restutil.http_post, health_report_uri, health_report, headers=headers, max_retry=30, retry_delay=15) except HttpError as e: raise ProtocolError((u"Failed to send provision status: " u"{0}").format(e)) if restutil.request_failed(resp): raise ProtocolError((u"Failed to send provision status: " u"{0}: {1}").format(resp.status, resp.read())) def _send_encoded_event(self, provider_id, event_str, flush, encoding='utf8'): uri = TELEMETRY_URI.format(self.get_endpoint(), TELEMETRY_DATA) data_format_header = ustr('<?xml version="1.0"?><TelemetryData version="1.0"><Provider id="{0}">').format( provider_id).encode(encoding) data_format_footer = ustr('</Provider></TelemetryData>').encode(encoding) # The event string should already be encoded by the time it gets here; to avoid double encoding, # only the header and footer are encoded and the three parts are concatenated. data = data_format_header + event_str + data_format_footer try: header = self.get_header_for_xml_content() # NOTE: The call to wireserver requests utf-8 encoding in the headers, but the body should not # be encoded: some nodes in the telemetry pipeline do not support utf-8 encoding.
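The framing done by `_send_encoded_event`, where only the envelope is encoded and the already-encoded event bytes are spliced in between, can be sketched on its own (`frame_telemetry_events` is a hypothetical helper, and the envelope element names are an assumption based on the agent's v1 telemetry format):

```python
def frame_telemetry_events(provider_id, encoded_events, encoding='utf-8'):
    """Wrap pre-encoded event bytes in a provider envelope.

    The payload is expected to be bytes already; encoding only the header and
    footer avoids double-encoding the event data.
    """
    header = '<?xml version="1.0"?><TelemetryData version="1.0"><Provider id="{0}">'.format(
        provider_id).encode(encoding)
    footer = '</Provider></TelemetryData>'.encode(encoding)
    return header + encoded_events + footer
```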
# If this is an important event flush, use a shorter throttle delay on throttling errors (to avoid a long delay completing this operation) if flush: resp = self.call_wireserver(restutil.http_post, uri, data, header, max_retry=3, throttle_delay=TELEMETRY_FLUSH_THROTTLE_DELAY_IN_SECONDS) else: resp = self.call_wireserver(restutil.http_post, uri, data, header, max_retry=3, throttle_delay=TELEMETRY_THROTTLE_DELAY_IN_SECONDS) except HttpError as e: raise ProtocolError("Failed to send events:{0}".format(e)) if restutil.request_failed(resp): logger.verbose(resp.read()) raise ProtocolError( "Failed to send events:{0}".format(resp.status)) def report_event(self, events_iterator, flush=False): buf = {} debug_info = CollectOrReportEventDebugInfo(operation=CollectOrReportEventDebugInfo.OP_REPORT) events_per_provider = defaultdict(int) def _send_event(provider_id, debug_info, flush): try: self._send_encoded_event(provider_id, buf[provider_id], flush) except UnicodeError as uni_error: debug_info.update_unicode_error(uni_error) except Exception as error: debug_info.update_op_error(error) # Group events by providerId for event in events_iterator: try: if event.providerId not in buf: buf[event.providerId] = b"" event_str = event_to_v1_encoded(event) if len(event_str) >= MAX_EVENT_BUFFER_SIZE: # Ignore single events that are too large to send out details_of_event = [ustr(x.name) + ":" + ustr(x.value) for x in event.parameters if x.name in [GuestAgentExtensionEventsSchema.Name, GuestAgentExtensionEventsSchema.Version, GuestAgentExtensionEventsSchema.Operation, GuestAgentExtensionEventsSchema.OperationSuccess]] logger.periodic_warn(logger.EVERY_HALF_HOUR, "Single event too large: {0}; length {1} exceeds the limit ({2})" .format(str(details_of_event), len(event_str), MAX_EVENT_BUFFER_SIZE)) continue # If the buffer is full, send out the events in the buffer and reset it if len(buf[event.providerId] + event_str) >= MAX_EVENT_BUFFER_SIZE: logger.verbose("No of events this request = 
{0}".format(events_per_provider[event.providerId])) _send_event(event.providerId, debug_info, flush) buf[event.providerId] = b"" events_per_provider[event.providerId] = 0 # Add encoded events to the buffer buf[event.providerId] = buf[event.providerId] + event_str events_per_provider[event.providerId] += 1 except Exception as error: logger.warn("Unexpected error when generating Events:{0}", textutil.format_exception(error)) # Send out all events left in buffer. for provider_id in list(buf.keys()): if buf[provider_id]: logger.verbose("No of events this request = {0}".format(events_per_provider[provider_id])) _send_event(provider_id, debug_info, flush) debug_info.report_debug_info() return debug_info.get_error_count() == 0 def report_status_event(self, message, is_success): report_event(op=WALAEventOperation.ReportStatus, is_success=is_success, message=message, log_event=not is_success) def get_header(self): return { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": PROTOCOL_VERSION } def get_header_for_xml_content(self): return { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": PROTOCOL_VERSION, "Content-Type": "text/xml;charset=utf-8" } def get_header_for_remote_access(self): return self.get_headers_for_encrypted_request("AES128_CBC") @staticmethod def get_headers_for_encrypted_request(cypher): trans_cert_file = os.path.join(conf.get_lib_dir(), TRANSPORT_CERT_FILE_NAME) try: content = fileutil.read_file(trans_cert_file) except IOError as e: raise ProtocolError("Failed to read {0}: {1}".format(trans_cert_file, e)) cert = get_bytes_from_pem(content) headers = { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": PROTOCOL_VERSION, "x-ms-guest-agent-public-x509-cert": cert } if cypher is not None: # the cypher header is optional, currently defaults to AES128_CBC headers["x-ms-cipher-name"] = cypher return headers def get_host_plugin(self): if self._host_plugin is None: self._host_plugin = HostPluginProtocol(self.get_endpoint()) 
GoalState.update_host_plugin_headers(self) return self._host_plugin def get_on_hold(self): return self.get_goal_state().extensions_goal_state.on_hold def upload_logs(self, content): host = self.get_host_plugin() return host.put_vm_log(content) class VersionInfo(object): def __init__(self, xml_text): """ Query endpoint server for wire protocol version. Fail if our desired protocol version is not seen. """ logger.verbose("Load Version.xml") self.parse(xml_text) def parse(self, xml_text): xml_doc = parse_doc(xml_text) preferred = find(xml_doc, "Preferred") self.preferred = findtext(preferred, "Version") logger.info("Fabric preferred wire protocol version:{0}", self.preferred) self.supported = [] supported = find(xml_doc, "Supported") supported_version = findall(supported, "Version") for node in supported_version: version = gettext(node) logger.verbose("Fabric supported wire protocol version:{0}", version) self.supported.append(version) def get_preferred(self): return self.preferred def get_supported(self): return self.supported # Do not extend this class class InVMArtifactsProfile(object): """ deserialized json string of InVMArtifactsProfile. 
It is expected to contain the following fields: * inVMArtifactsProfileBlobSeqNo * profileId (optional) * onHold (optional) * certificateThumbprint (optional) * encryptedHealthChecks (optional) * encryptedApplicationProfile (optional) """ def __init__(self, artifacts_profile): if not textutil.is_str_empty(artifacts_profile): self.__dict__.update(parse_json(artifacts_profile)) def is_on_hold(self): # hasattr() is not available in Python 2.6 if 'onHold' in self.__dict__: return str(self.onHold).lower() == 'true' # pylint: disable=E1101 return False Azure-WALinuxAgent-a976115/azurelinuxagent/common/singletonperthread.py000066400000000000000000000030661510742556200263670ustar00rootroot00000000000000from threading import Lock, current_thread class _SingletonPerThreadMetaClass(type): """ A metaclass that creates a SingletonPerThread base class when called. """ _instances = {} _lock = Lock() def __call__(cls, *args, **kwargs): with cls._lock: # Object Name = className__threadName obj_name = "%s__%s" % (cls.__name__, current_thread().name) if obj_name not in cls._instances: cls._instances[obj_name] = super(_SingletonPerThreadMetaClass, cls).__call__(*args, **kwargs) return cls._instances[obj_name] class SingletonPerThread(_SingletonPerThreadMetaClass('SingleObjectPerThreadMetaClass', (object,), {})): # This base class calls the metaclass above to create the singleton per thread object. 
This class provides an # abstraction over how to invoke the Metaclass so just inheriting this class makes the # child class a singleton per thread (As opposed to invoking the Metaclass separately for each derived classes) # More info here - https://stackoverflow.com/questions/6760685/creating-a-singleton-in-python # # Usage: # Inheriting this class will create a Singleton per thread for that class # To delete the cached object of a class, call DerivedClassName.clear() to delete the object per thread # Note: If the thread dies and is recreated with the same thread name, the existing object would be reused # and no new object for the derived class would be created unless DerivedClassName.clear() is called explicitly to # delete the cache pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/telemetryevent.py000066400000000000000000000073271510742556200255460ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.datacontract import DataContract, DataContractList from azurelinuxagent.common.version import AGENT_NAME class CommonTelemetryEventSchema(object): # Common schema keys for GuestAgentExtensionEvents, GuestAgentGenericLogs # and GuestAgentPerformanceCounterEvents tables in Kusto. 
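The `SingletonPerThread` pattern above can be condensed into a self-contained sketch showing why each thread gets its own instance: the metaclass caches instances under a "ClassName__ThreadName" key (class names here are illustrative, not the agent's):

```python
from threading import Lock, Thread, current_thread


class _PerThreadMeta(type):
    _instances = {}
    _lock = Lock()

    def __call__(cls, *args, **kwargs):
        with cls._lock:
            # Cache key combines the class name and the current thread's name
            key = "%s__%s" % (cls.__name__, current_thread().name)
            if key not in cls._instances:
                cls._instances[key] = super(_PerThreadMeta, cls).__call__(*args, **kwargs)
            return cls._instances[key]


# Inheriting from a class built by the metaclass makes the child a
# singleton-per-thread (same trick as the SingletonPerThread base class).
class Counter(_PerThreadMeta("Base", (object,), {})):
    def __init__(self):
        self.value = 0
```

As the original comment warns, if a thread dies and a new thread is created with the same name, the cached instance is reused until the cache entry is cleared explicitly.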
EventPid = "EventPid" EventTid = "EventTid" GAVersion = "GAVersion" ContainerId = "ContainerId" TaskName = "TaskName" OpcodeName = "OpcodeName" KeywordName = "KeywordName" OSVersion = "OSVersion" ExecutionMode = "ExecutionMode" RAM = "RAM" Processors = "Processors" TenantName = "TenantName" RoleName = "RoleName" RoleInstanceName = "RoleInstanceName" Location = "Location" SubscriptionId = "SubscriptionId" ResourceGroupName = "ResourceGroupName" VMId = "VMId" ImageOrigin = "ImageOrigin" class GuestAgentGenericLogsSchema(CommonTelemetryEventSchema): # GuestAgentGenericLogs table specific schema keys EventName = "EventName" CapabilityUsed = "CapabilityUsed" Context1 = "Context1" Context2 = "Context2" Context3 = "Context3" class GuestAgentExtensionEventsSchema(CommonTelemetryEventSchema): # GuestAgentExtensionEvents table specific schema keys ExtensionType = "ExtensionType" IsInternal = "IsInternal" Name = "Name" Version = "Version" Operation = "Operation" OperationSuccess = "OperationSuccess" Message = "Message" Duration = "Duration" class GuestAgentPerfCounterEventsSchema(CommonTelemetryEventSchema): # GuestAgentPerformanceCounterEvents table specific schema keys Category = "Category" Counter = "Counter" Instance = "Instance" Value = "Value" class TelemetryEventParam(DataContract): def __init__(self, name=None, value=None): self.name = name self.value = value def __eq__(self, other): return isinstance(other, TelemetryEventParam) and other.name == self.name and other.value == self.value class TelemetryEvent(DataContract): def __init__(self, eventId=None, providerId=None): self.eventId = eventId self.providerId = providerId self.parameters = DataContractList(TelemetryEventParam) self.file_type = "" # Checking if the particular param name is in the TelemetryEvent. 
def __contains__(self, param_name): return param_name in [param.name for param in self.parameters] def is_extension_event(self): # Events originating from the agent have "WALinuxAgent" as the Name parameter, or they don't have a Name # parameter, in the case of log and metric events. So, in case the Name parameter exists and it is not # "WALinuxAgent", it is an extension event. for param in self.parameters: if param.name == GuestAgentExtensionEventsSchema.Name: return param.value != AGENT_NAME return False def get_version(self): for param in self.parameters: if param.name == GuestAgentExtensionEventsSchema.Version: return param.value return None
Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/000077500000000000000000000000001510742556200232475ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/__init__.py000066400000000000000000000011661510742556200253640ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/archive.py000066400000000000000000000260411510742556200252450ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License.
import errno import glob import os import re import shutil import zipfile from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.utils import fileutil # pylint: disable=W0105 """ archive.py The module supports the archiving of guest agent state. Guest agent state is flushed whenever there is an incarnation change. The flush is archived periodically (once a day). The process works as follows whenever a new incarnation arrives. 1. Flush - move all state files to a new directory under .../history/timestamp/. 2. Archive - enumerate all directories under .../history/timestamp, create a .zip file named timestamp.zip, and delete the archived directory. 3. Purge - glob the list of .zip files, sort by timestamp in descending order, keep the first 50 results, and delete the rest. ... is the directory where the agent's state resides, by default this is /var/lib/waagent. The timestamp is an ISO8601 formatted value. """ # pylint: enable=W0105 ARCHIVE_DIRECTORY_NAME = 'history' # TODO: See comment in GoalStateHistory._save_placeholder and remove this code when no longer needed _PLACEHOLDER_FILE_NAME = 'GoalState.1.xml' # END TODO _MAX_ARCHIVED_STATES = 50 _CACHE_PATTERNS = [ # # Note that SharedConfig.xml is not included here; this file is used by other components (Azsec and Singularity/HPC Infiniband) # re.compile(r"^VmSettings\.\d+\.json$"), re.compile(r"^(.*)\.(\d+)\.(agentsManifest)$", re.IGNORECASE), re.compile(r"^(.*)\.(\d+)\.(manifest\.xml)$", re.IGNORECASE), re.compile(r"^(.*)\.(\d+)\.(xml)$", re.IGNORECASE), re.compile(r"^HostingEnvironmentConfig\.xml$", re.IGNORECASE), re.compile(r"^RemoteAccess\.xml$", re.IGNORECASE), re.compile(r"^waagent_status\.\d+\.json$"), ] # # Legacy names # 2018-04-06T08:21:37.142697 # 2018-04-06T08:21:37.142697.zip # 2018-04-06T08:21:37.142697_incarnation_N # 2018-04-06T08:21:37.142697_incarnation_N.zip # 2018-04-06T08:21:37.142697_N-M # 2018-04-06T08:21:37.142697_N-M.zip # # Current names # #
2018-04-06T08-21-37__N-M # 2018-04-06T08-21-37__N-M.zip # _ARCHIVE_BASE_PATTERN = r"\d{4}\-\d{2}\-\d{2}T\d{2}[:-]\d{2}[:-]\d{2}(\.\d+)?((_incarnation)?_+(\d+|status)(-\d+)?)?" _ARCHIVE_PATTERNS_DIRECTORY = re.compile(r'^{0}$'.format(_ARCHIVE_BASE_PATTERN)) _ARCHIVE_PATTERNS_ZIP = re.compile(r'^{0}\.zip$'.format(_ARCHIVE_BASE_PATTERN)) _GOAL_STATE_FILE_NAME = "GoalState.xml" _VM_SETTINGS_FILE_NAME = "VmSettings.json" _CERTIFICATES_FILE_NAME = "Certificates.json" _HOSTING_ENV_FILE_NAME = "HostingEnvironmentConfig.xml" _REMOTE_ACCESS_FILE_NAME = "RemoteAccess.xml" _EXT_CONF_FILE_NAME = "ExtensionsConfig.xml" _MANIFEST_FILE_NAME = "{0}.manifest.xml" AGENT_STATUS_FILE = "waagent_status.json" SHARED_CONF_FILE_NAME = "SharedConfig.xml" # TODO: use @total_ordering once RHEL/CentOS and SLES 11 are EOL. # @total_ordering first appeared in Python 2.7 and 3.2 # If there are more use cases for @total_ordering, I will # consider re-implementing it. class State(object): def __init__(self, path, timestamp): self._path = path self._timestamp = timestamp @property def timestamp(self): return self._timestamp def delete(self): pass def archive(self): pass def __eq__(self, other): return self._timestamp == other.timestamp def __ne__(self, other): return self._timestamp != other.timestamp def __lt__(self, other): return self._timestamp < other.timestamp def __gt__(self, other): return self._timestamp > other.timestamp def __le__(self, other): return self._timestamp <= other.timestamp def __ge__(self, other): return self._timestamp >= other.timestamp class StateZip(State): def delete(self): os.remove(self._path) class StateDirectory(State): def delete(self): shutil.rmtree(self._path) def archive(self): fn_tmp = "{0}.zip.tmp".format(self._path) filename = "{0}.zip".format(self._path) ziph = None try: # contextmanager for zipfile.ZipFile doesn't exist for py2.6, manually closing it ziph = zipfile.ZipFile(fn_tmp, 'w') for current_file in os.listdir(self._path): full_path = 
os.path.join(self._path, current_file) ziph.write(full_path, current_file, zipfile.ZIP_DEFLATED) finally: if ziph is not None: ziph.close() os.rename(fn_tmp, filename) shutil.rmtree(self._path) class StateArchiver(object): def __init__(self, lib_dir): self._source = os.path.join(lib_dir, ARCHIVE_DIRECTORY_NAME) if not os.path.isdir(self._source): try: fileutil.mkdir(self._source, mode=0o700) except IOError as exception: if exception.errno != errno.EEXIST: logger.warn("{0} : {1}", self._source, exception.strerror) @staticmethod def purge_legacy_goal_state_history(): lib_dir = conf.get_lib_dir() for current_file in os.listdir(lib_dir): # Don't remove the placeholder goal state file. # TODO: See comment in GoalStateHistory._save_placeholder and remove this code when no longer needed if current_file == _PLACEHOLDER_FILE_NAME: continue # END TODO full_path = os.path.join(lib_dir, current_file) for pattern in _CACHE_PATTERNS: match = pattern.match(current_file) if match is not None: try: os.remove(full_path) except Exception as e: logger.warn("Cannot delete legacy history file '{0}': {1}".format(full_path, e)) break def archive(self): states = self._get_archive_states() if len(states) > 0: # Skip the most recent goal state, since it may still be in use for state in states[1:]: state.archive() def _get_archive_states(self): states = [] for current_file in os.listdir(self._source): full_path = os.path.join(self._source, current_file) match = _ARCHIVE_PATTERNS_DIRECTORY.match(current_file) if match is not None: states.append(StateDirectory(full_path, match.group(0))) match = _ARCHIVE_PATTERNS_ZIP.match(current_file) if match is not None: states.append(StateZip(full_path, match.group(0))) states.sort(key=lambda state: os.path.getctime(state._path), reverse=True) return states class GoalStateHistory(object): def __init__(self, time, tag): self._errors = False timestamp = GoalStateHistory._create_timestamp(time) self._root = os.path.join(conf.get_lib_dir(), 
ARCHIVE_DIRECTORY_NAME, "{0}__{1}".format(timestamp, tag) if tag is not None else timestamp) GoalStateHistory._purge() @staticmethod def _create_timestamp(dt): """ Returns a string with the given datetime formatted as a timestamp for the agent's history folder """ return dt.strftime('%Y-%m-%dT%H-%M-%S') @staticmethod def tag_exists(tag): """ Returns True when an item with the given 'tag' already exists in the history directory """ return len(glob.glob(os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME, "*_{0}".format(tag)))) > 0 def save(self, data, file_name): try: if not os.path.exists(self._root): fileutil.mkdir(self._root, mode=0o700) with open(os.path.join(self._root, file_name), "w") as handle: handle.write(data) except Exception as e: if not self._errors: # report only 1 error per directory self._errors = True logger.warn("Failed to save {0} to the goal state history: {1} [no additional errors saving the goal state will be reported]".format(file_name, e)) _purge_error_count = 0 @staticmethod def _purge(): """ Delete "old" history directories and .zip archives. Old is defined as any directories or files older than the X newest ones. """ try: history_root = os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME) if not os.path.exists(history_root): return items = [] for current_item in os.listdir(history_root): full_path = os.path.join(history_root, current_item) items.append(full_path) items.sort(key=os.path.getctime, reverse=True) for current_item in items[_MAX_ARCHIVED_STATES:]: if os.path.isfile(current_item): os.remove(current_item) else: shutil.rmtree(current_item) if GoalStateHistory._purge_error_count > 0: GoalStateHistory._purge_error_count = 0 # Log a success message when we are recovering from errors. 
logger.info("Successfully cleaned up the goal state history directory") except Exception as e: GoalStateHistory._purge_error_count += 1 if GoalStateHistory._purge_error_count < 5: logger.warn("Failed to clean up the goal state history directory: {0}".format(e)) elif GoalStateHistory._purge_error_count == 5: logger.warn("Failed to clean up the goal state history directory [will stop reporting these errors]: {0}".format(e)) @staticmethod def _save_placeholder(): """ Some internal components took a dependency in the legacy GoalState.*.xml file. We create it here while those components are updated to remove the dependency. When removing this code, also remove the check in StateArchiver.purge_legacy_goal_state_history, and the definition of _PLACEHOLDER_FILE_NAME """ try: placeholder = os.path.join(conf.get_lib_dir(), _PLACEHOLDER_FILE_NAME) with open(placeholder, "w") as handle: handle.write("empty placeholder file") except Exception as e: logger.warn("Failed to save placeholder file ({0}): {1}".format(_PLACEHOLDER_FILE_NAME, e)) def save_goal_state(self, text): self.save(text, _GOAL_STATE_FILE_NAME) self._save_placeholder() def save_extensions_config(self, text): self.save(text, _EXT_CONF_FILE_NAME) def save_vm_settings(self, text): self.save(text, _VM_SETTINGS_FILE_NAME) def save_remote_access(self, text): self.save(text, _REMOTE_ACCESS_FILE_NAME) def save_certificates(self, text): self.save(text, _CERTIFICATES_FILE_NAME) def save_hosting_env(self, text): self.save(text, _HOSTING_ENV_FILE_NAME) def save_shared_conf(self, text): self.save(text, SHARED_CONF_FILE_NAME) def save_manifest(self, name, text): self.save(text, _MANIFEST_FILE_NAME.format(name)) Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/cryptutil.py000066400000000000000000000162511510742556200256650ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except 
in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import base64 import errno import struct import os.path import subprocess from azurelinuxagent.common.future import ustr, bytebuffer from azurelinuxagent.common.exception import CryptError import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil DECRYPT_SECRET_CMD = "{0} cms -decrypt -inform DER -inkey {1} -in /dev/stdin" class CryptUtil(object): def __init__(self, openssl_cmd): self.openssl_cmd = openssl_cmd def gen_transport_cert(self, prv_file, crt_file): """ Create ssl certificate for https communication with endpoint server. """ cmd = [self.openssl_cmd, "req", "-x509", "-nodes", "-subj", "/CN=LinuxTransport", "-days", "730", "-newkey", "rsa:2048", "-keyout", prv_file, "-out", crt_file] try: shellutil.run_command(cmd) except shellutil.CommandError as cmd_err: msg = "Failed to create {0} and {1} certificates.\n[stdout]\n{2}\n\n[stderr]\n{3}\n"\ .format(prv_file, crt_file, cmd_err.stdout, cmd_err.stderr) logger.error(msg) def get_pubkey_from_prv(self, file_name): if not os.path.exists(file_name): raise IOError(errno.ENOENT, "File not found", file_name) # OpenSSL's pkey command may not be available on older versions so try 'rsa' first. 
try: command = [self.openssl_cmd, "rsa", "-in", file_name, "-pubout"] return shellutil.run_command(command, log_error=False) except shellutil.CommandError as error: if not ("Not an RSA key" in error.stderr or "expecting an rsa key" in error.stderr): logger.error( "Command: [{0}], return code: [{1}], stdout: [{2}] stderr: [{3}]", " ".join(command), error.returncode, error.stdout, error.stderr) raise return shellutil.run_command([self.openssl_cmd, "pkey", "-in", file_name, "-pubout"], log_error=True) def get_pubkey_from_crt(self, file_name): if not os.path.exists(file_name): raise IOError(errno.ENOENT, "File not found", file_name) else: cmd = [self.openssl_cmd, "x509", "-in", file_name, "-pubkey", "-noout"] pub = shellutil.run_command(cmd, log_error=True) return pub def get_thumbprint_from_crt(self, file_name): if not os.path.exists(file_name): raise IOError(errno.ENOENT, "File not found", file_name) else: cmd = [self.openssl_cmd, "x509", "-in", file_name, "-fingerprint", "-noout"] thumbprint = shellutil.run_command(cmd) thumbprint = thumbprint.rstrip().split('=')[1].replace(':', '').upper() return thumbprint def decrypt_certificates_p7m(self, p7m_file, trans_prv_file, trans_cert_file, pfx_file): umask = None try: umask = os.umask(0o077) with open(pfx_file, "wb") as pfx_file_: shellutil.run_command([self.openssl_cmd, "cms", "-decrypt", "-in", p7m_file, "-inkey", trans_prv_file, "-recip", trans_cert_file], stdout=pfx_file_) finally: if umask is not None: os.umask(umask) def convert_pfx_to_pem(self, pfx_file, nomacver, pem_file): command = [self.openssl_cmd, "pkcs12", "-nodes", "-password", "pass:", "-in", pfx_file, "-out", pem_file] if nomacver: command.append("-nomacver") shellutil.run_command(command) def crt_to_ssh(self, input_file, output_file): with open(output_file, "ab") as file_out: cmd = ["ssh-keygen", "-i", "-m", "PKCS8", "-f", input_file] try: shellutil.run_command(cmd, stdout=file_out, log_error=True) except shellutil.CommandError: pass # nothing to do; 
the error is already logged

    def asn1_to_ssh(self, pubkey):
        lines = pubkey.split("\n")
        lines = [x for x in lines if not x.startswith("----")]
        base64_encoded = "".join(lines)
        try:
            # TODO: remove the pyasn1 dependency
            from pyasn1.codec.der import decoder as der_decoder
            der_encoded = base64.b64decode(base64_encoded)
            der_encoded = der_decoder.decode(der_encoded)[0][1]  # pylint: disable=unsubscriptable-object
            key = der_decoder.decode(self.bits_to_bytes(der_encoded))[0]
            n = key[0]  # pylint: disable=unsubscriptable-object
            e = key[1]  # pylint: disable=unsubscriptable-object
            keydata = bytearray()
            keydata.extend(struct.pack('>I', len("ssh-rsa")))
            keydata.extend(b"ssh-rsa")
            keydata.extend(struct.pack('>I', len(self.num_to_bytes(e))))
            keydata.extend(self.num_to_bytes(e))
            keydata.extend(struct.pack('>I', len(self.num_to_bytes(n)) + 1))
            keydata.extend(b"\0")
            keydata.extend(self.num_to_bytes(n))
            keydata_base64 = base64.b64encode(bytebuffer(keydata))
            return ustr(b"ssh-rsa " + keydata_base64 + b"\n", encoding='utf-8')
        except ImportError:
            raise CryptError("Failed to load pyasn1.codec.der")

    def num_to_bytes(self, num):
        """
        Pack a number into bytes (big-endian). Return it as a bytearray.
        """
        result = bytearray()
        while num:
            result.append(num & 0xFF)
            num >>= 8
        result.reverse()
        return result

    def bits_to_bytes(self, bits):
        """
        Convert an array of bits ([0, 1] values) to a byte array.
        """
        index = 7
        byte_array = bytearray()
        curr = 0
        for bit in bits:
            curr = curr | (bit << index)
            index = index - 1
            if index == -1:
                byte_array.append(curr)
                curr = 0
                index = 7
        return bytes(byte_array)

    def decrypt_secret(self, encrypted_password, private_key):
        try:
            decoded = base64.b64decode(encrypted_password)
            args = DECRYPT_SECRET_CMD.format(self.openssl_cmd, private_key).split(' ')
            output = shellutil.run_command(args, input=decoded, stderr=subprocess.STDOUT,
                                           encode_input=False, encode_output=False)
            return output.decode('utf-16')
        except shellutil.CommandError as command_error:
            raise subprocess.CalledProcessError(command_error.returncode, "openssl cms -decrypt",
                                                output=command_error.stdout)
        except Exception as e:
            raise CryptError("Error decoding secret", e)
Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/distro_version.py000066400000000000000000000066661510742556200267060ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

"""
"""

import re


class DistroVersion(object):
    """
    Distro versions (as exposed by azurelinuxagent.common.version.DISTRO_VERSION) can be very arbitrary:

        9.2.0
        0.0.0.0_99496
        10.0_RC2
        1.4-rolling-202402090309
        2015.11-git
        2023
        2023.02.1
        2.1-systemd-rc1
        2308a
        3.11.2-dev20240212t1512utc-autotag
        3.11.2-rc.1
        3.1.22-1.8
        8.1.3-p1-24838
        8.1.3-p8-khilan.unadkat-08415223c9a99546b566df0dbc683ffa378cfd77
        9.13.1P8X1
        9.13.1RC1
        9.2.0-beta1-25971
        a
        ArrayOS
        bookworm/sid
        Clawhammer__9.14.0
        FFFF
        h
        JNPR-11.0-20200922.4042921_build
        lighthouse-23.10.0
        Lighthouse__9.13.1
        linux-os-31700
        Mightysquirrel__9.15.0
        n/a
        NAME="SLES"
        ngfw-6.10.13.26655.fips.2
        r11427-9ce6aa9d8d
        SonicOSX 7.1.1-7047-R3003-HF24239
        unstable
        vsbc-x86_pi3-6.10.3
        vsbc-x86_pi3-6.12.2pre02

    DistroVersion allows comparing these versions, following a strategy similar to the now-deprecated
    distutils.LooseVersion: versions consist of a series of sequences of numbers, alphabetic characters, or any
    other characters, optionally separated by dots (the dots themselves are stripped out). When comparing versions,
    the numeric components are compared numerically, while the other components are compared lexicographically.

    NOTE: For entities with simpler version schemes (e.g. extensions and the Agent), use FlexibleVersion.
    """
    def __init__(self, version):
        self._version = version

        self._fragments = [
            int(x) if DistroVersion._number_re.match(x) else x
            for x in DistroVersion._fragment_re.split(self._version)
            if x != '' and x != '.'
        ]

    _fragment_re = re.compile(r'(\d+|[a-z]+|\.)', re.IGNORECASE)

    _number_re = re.compile(r'\d+')

    def __str__(self):
        return self._version

    def __repr__(self):
        return str(self)

    def __eq__(self, other):
        return self._compare(other) == 0

    def __lt__(self, other):
        return self._compare(other) < 0

    def __le__(self, other):
        return self._compare(other) <= 0

    def __gt__(self, other):
        return self._compare(other) > 0

    def __ge__(self, other):
        return self._compare(other) >= 0

    def _compare(self, other):
        if isinstance(other, str):
            other = DistroVersion(other)

        if self._fragments < other._fragments:
            return -1
        if self._fragments > other._fragments:
            return 1
        return 0
Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/fileutil.py000066400000000000000000000154001510742556200254360ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
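The fragment-splitting comparison that the DistroVersion docstring describes can be sketched standalone; this mirrors the regexes used in `DistroVersion.__init__`, with a free function name (`version_fragments`) of my own choosing for illustration:

```python
import re

# Same patterns as DistroVersion: runs of digits, runs of letters, or a dot.
_FRAGMENT_RE = re.compile(r'(\d+|[a-z]+|\.)', re.IGNORECASE)
_NUMBER_RE = re.compile(r'\d+')


def version_fragments(version):
    # Split into numeric and textual fragments; the dots are stripped out,
    # and numbers become ints so they compare numerically rather than
    # lexicographically.
    return [int(x) if _NUMBER_RE.match(x) else x
            for x in _FRAGMENT_RE.split(version)
            if x != '' and x != '.']
```

Comparing the fragment lists gives, for example, 9.13.1 < 9.100.0 (13 < 100 numerically), which a plain string comparison would get wrong.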
# # Requires Python 2.6+ and Openssl 1.0+ # """ File operation util functions """ import errno as errno import glob import os import pwd import re import shutil import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.future import ustr KNOWN_IOERRORS = [ errno.EIO, # I/O error errno.ENOMEM, # Out of memory errno.ENFILE, # File table overflow errno.EMFILE, # Too many open files errno.ENOSPC, # Out of space errno.ENAMETOOLONG, # Name too long errno.ELOOP, # Too many symbolic links encountered 121 # Remote I/O error (errno.EREMOTEIO -- not present in all Python 2.7+) ] def read_file(filepath, asbin=False, remove_bom=False, encoding='utf-8'): """ Read and return contents of 'filepath'. """ mode = 'rb' with open(filepath, mode) as in_file: data = in_file.read() if data is None: return None if asbin: return data if remove_bom: # remove bom on bytes data before it is converted into string. data = textutil.remove_bom(data) data = ustr(data, encoding=encoding) return data def write_file(filepath, contents, asbin=False, encoding='utf-8', append=False): """ Write 'contents' to 'filepath'. """ mode = "ab" if append else "wb" data = contents if not asbin: data = contents.encode(encoding) with open(filepath, mode) as out_file: out_file.write(data) def append_file(filepath, contents, asbin=False, encoding='utf-8'): """ Append 'contents' to 'filepath'. 
""" write_file(filepath, contents, asbin=asbin, encoding=encoding, append=True) def base_name(path): head, tail = os.path.split(path) # pylint: disable=W0612 return tail def get_line_startingwith(prefix, filepath): """ Return line from 'filepath' if the line startswith 'prefix' """ for line in read_file(filepath).split('\n'): if line.startswith(prefix): return line return None def mkdir(dirpath, mode=None, owner=None, reset_mode_and_owner=True): if not os.path.isdir(dirpath): os.makedirs(dirpath) reset_mode_and_owner = True # force setting the mode and owner if reset_mode_and_owner: if mode is not None: chmod(dirpath, mode) if owner is not None: chowner(dirpath, owner) def chowner(path, owner): if not os.path.exists(path): logger.error("Path does not exist: {0}".format(path)) else: owner_info = pwd.getpwnam(owner) os.chown(path, owner_info[2], owner_info[3]) def chmod(path, mode): if not os.path.exists(path): logger.error("Path does not exist: {0}".format(path)) else: os.chmod(path, mode) def rm_files(*args): for paths in args: # find all possible file paths for path in glob.glob(paths): if os.path.isfile(path): os.remove(path) def rm_dirs(*args): """ Remove the contents of each directory """ for p in args: if not os.path.isdir(p): continue for pp in os.listdir(p): path = os.path.join(p, pp) if os.path.isfile(path): os.remove(path) elif os.path.islink(path): os.unlink(path) elif os.path.isdir(path): shutil.rmtree(path) def trim_ext(path, ext): if not ext.startswith("."): ext = "." 
+ ext return path.split(ext)[0] if path.endswith(ext) else path def update_conf_file(path, line_start, val, chk_err=False): conf = [] if not os.path.isfile(path) and chk_err: raise IOError("Can't find config file:{0}".format(path)) conf = read_file(path).split('\n') conf = [x for x in conf if x is not None and len(x) > 0 and not x.startswith(line_start)] conf.append(val) write_file(path, '\n'.join(conf) + '\n') def search_file(target_dir_name, target_file_name): for root, dirs, files in os.walk(target_dir_name): # pylint: disable=W0612 for file_name in files: if file_name == target_file_name: return os.path.join(root, file_name) return None def chmod_tree(path, mode): for root, dirs, files in os.walk(path): # pylint: disable=W0612 for file_name in files: os.chmod(os.path.join(root, file_name), mode) def findstr_in_file(file_path, line_str): """ Return True if the line is in the file; False otherwise. (Trailing whitespace is ignored.) """ try: with open(file_path, 'r') as fh: for line in fh.readlines(): if line_str == line.rstrip(): return True except Exception: # swallow exception pass return False def findre_in_file(file_path, line_re): """ Return match object if found in file. """ try: with open(file_path, 'r') as fh: pattern = re.compile(line_re) for line in fh.readlines(): match = re.search(pattern, line) if match: return match except: # pylint: disable=W0702 pass return None def get_all_files(root_path): """ Find all files under the given root path """ result = [] for root, dirs, files in os.walk(root_path): # pylint: disable=W0612 result.extend([os.path.join(root, file) for file in files]) # pylint: disable=redefined-builtin return result def clean_ioerror(e, paths=None): """ Clean-up possibly bad files and directories after an IO error. The code ignores *all* errors since disk state may be unhealthy. 
""" if paths is None: paths = [] if isinstance(e, IOError) and e.errno in KNOWN_IOERRORS: for path in paths: if path is None: continue try: if os.path.isdir(path): shutil.rmtree(path, ignore_errors=True) else: os.remove(path) except Exception: # swallow exception pass Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/flexible_version.py000066400000000000000000000176371510742556200271760ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import re class FlexibleVersion(object): """ A more flexible implementation of distutils.version.StrictVersion. NOTE: Use this class for generic version comparisons, e.g. extension and Agent versions. Distro versions can be very arbitrary and should be handled using the DistroVersion class. 
The implementation allows you to specify:

        - an arbitrary number of version numbers: not only '1.2.3', but also '1.2.3.4.5'
        - the separator between version numbers: '1-2-3' is allowed when '-' is specified as separator
        - a flexible pre-release separator: '1.2.3.alpha1', '1.2.3-alpha1', and '1.2.3alpha1' are considered equivalent
        - an arbitrary ordering of pre-release tags: 1.1alpha3 < 1.1beta2 < 1.1rc1 < 1.1 when
          ["alpha", "beta", "rc"] is specified as the pre-release tag list

    Inspired by this discussion on StackOverflow:
        http://stackoverflow.com/questions/12255554/sort-versions-in-python

    """

    def __init__(self, vstring=None, sep='.', prerel_tags=('alpha', 'beta', 'rc')):
        if sep is None:
            sep = '.'
        if prerel_tags is None:
            prerel_tags = ()

        self.sep = sep
        self.prerel_sep = ''
        self.prerel_tags = tuple(prerel_tags) if prerel_tags is not None else ()

        self._compile_pattern()

        self.prerelease = None
        self.version = ()
        if vstring:
            self._parse(str(vstring))
        return

    _nn_version = 'version'
    _nn_prerel_sep = 'prerel_sep'
    _nn_prerel_tag = 'tag'
    _nn_prerel_num = 'tag_num'

    _re_prerel_sep = r'(?P<{pn}>{sep})?'.format(
        pn=_nn_prerel_sep,
        sep='|'.join(map(re.escape, ('.', '-'))))

    @property
    def major(self):
        return self.version[0] if len(self.version) > 0 else 0

    @property
    def minor(self):
        return self.version[1] if len(self.version) > 1 else 0

    @property
    def patch(self):
        return self.version[2] if len(self.version) > 2 else 0

    def _parse(self, vstring):
        m = self.version_re.match(vstring)
        if not m:
            raise ValueError("Invalid version number '{0}'".format(vstring))

        self.prerelease = None
        self.version = ()

        self.prerel_sep = m.group(self._nn_prerel_sep)
        tag = m.group(self._nn_prerel_tag)
        tag_num = m.group(self._nn_prerel_num)

        if tag is not None and tag_num is not None:
            self.prerelease = (tag, int(tag_num) if len(tag_num) else None)

        self.version = tuple(map(int, self.sep_re.split(m.group(self._nn_version))))
        return

    def __add__(self, increment):
        version = list(self.version)  # pylint: disable=W0621
version[-1] += increment vstring = self._assemble(version, self.sep, self.prerel_sep, self.prerelease) return FlexibleVersion(vstring=vstring, sep=self.sep, prerel_tags=self.prerel_tags) def __sub__(self, decrement): version = list(self.version) # pylint: disable=W0621 if version[-1] <= 0: raise ArithmeticError("Cannot decrement final numeric component of {0} below zero" \ .format(self)) version[-1] -= decrement vstring = self._assemble(version, self.sep, self.prerel_sep, self.prerelease) return FlexibleVersion(vstring=vstring, sep=self.sep, prerel_tags=self.prerel_tags) def __repr__(self): return "{cls} ('{vstring}', '{sep}', {prerel_tags})"\ .format( cls=self.__class__.__name__, vstring=str(self), sep=self.sep, prerel_tags=self.prerel_tags) def __str__(self): return self._assemble(self.version, self.sep, self.prerel_sep, self.prerelease) def __ge__(self, that): return not self.__lt__(that) def __gt__(self, that): return (not self.__lt__(that)) and (not self.__eq__(that)) def __le__(self, that): return (self.__lt__(that)) or (self.__eq__(that)) def __lt__(self, that): this_version, that_version = self._ensure_compatible(that) if this_version != that_version \ or self.prerelease is None and that.prerelease is None: return this_version < that_version if self.prerelease is not None and that.prerelease is None: return True if self.prerelease is None and that.prerelease is not None: return False this_index = self.prerel_tags_set[self.prerelease[0]] that_index = self.prerel_tags_set[that.prerelease[0]] if this_index == that_index: return self.prerelease[1] < that.prerelease[1] return this_index < that_index def __ne__(self, that): return not self.__eq__(that) def __eq__(self, that): this_version, that_version = self._ensure_compatible(that) if this_version != that_version: return False if self.prerelease != that.prerelease: return False return True def matches(self, that): if self.sep != that.sep or len(self.version) > len(that.version): return False for i in 
range(len(self.version)): if self.version[i] != that.version[i]: return False if self.prerel_tags: return self.prerel_tags == that.prerel_tags return True def _assemble(self, version, sep, prerel_sep, prerelease): # pylint: disable=W0621 s = sep.join(map(str, version)) if prerelease is not None: if prerel_sep is not None: s += prerel_sep s += prerelease[0] if prerelease[1] is not None: s += str(prerelease[1]) return s def _compile_pattern(self): sep, self.sep_re = self._compile_separator(self.sep) if self.prerel_tags: tags = '|'.join(re.escape(tag) for tag in self.prerel_tags) self.prerel_tags_set = dict(zip(self.prerel_tags, range(len(self.prerel_tags)))) release_re = r'(?:{prerel_sep}(?P<{tn}>{tags})(?P<{nn}>\d*))?'.format( prerel_sep=self._re_prerel_sep, tags=tags, tn=self._nn_prerel_tag, nn=self._nn_prerel_num) else: release_re = '' version_re = r'^(?P<{vn}>\d+(?:(?:{sep}\d+)*)?){rel}$'.format( vn=self._nn_version, sep=sep, rel=release_re) self.version_re = re.compile(version_re) return def _compile_separator(self, sep): if sep is None: return '', re.compile('') return re.escape(sep), re.compile(re.escape(sep)) def _ensure_compatible(self, that): """ Ensures the instances have the same structure and, if so, returns length compatible version lists (so that x.y.0.0 is equivalent to x.y). 
""" if self.prerel_tags != that.prerel_tags or self.sep != that.sep: raise ValueError("Unable to compare: versions have different structures") this_version = list(self.version[:]) that_version = list(that.version[:]) while len(this_version) < len(that_version): this_version.append(0) while len(that_version) < len(this_version): that_version.append(0) return this_version, that_version Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/networkutil.py000066400000000000000000000066001510742556200262120ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # class RouteEntry(object): """ Represents a single route. The destination, gateway, and mask members are hex representations of the IPv4 address in network byte order. 
""" def __init__(self, interface, destination, gateway, mask, flags, metric): self.interface = interface self.destination = destination self.gateway = gateway self.mask = mask self.flags = int(flags, 16) self.metric = int(metric) @staticmethod def _net_hex_to_dotted_quad(value): if len(value) != 8: raise Exception("String to dotted quad conversion must be 8 characters") octets = [] for idx in range(6, -2, -2): octets.append(str(int(value[idx:idx + 2], 16))) return ".".join(octets) def destination_quad(self): return self._net_hex_to_dotted_quad(self.destination) def gateway_quad(self): return self._net_hex_to_dotted_quad(self.gateway) def mask_quad(self): return self._net_hex_to_dotted_quad(self.mask) def to_json(self): f = '{{"Iface": "{0}", "Destination": "{1}", "Gateway": "{2}", "Mask": "{3}", "Flags": "{4:#06x}", "Metric": "{5}"}}' return f.format(self.interface, self.destination_quad(), self.gateway_quad(), self.mask_quad(), self.flags, self.metric) def __str__(self): f = "Iface: {0}\tDestination: {1}\tGateway: {2}\tMask: {3}\tFlags: {4:#06x}\tMetric: {5}" return f.format(self.interface, self.destination_quad(), self.gateway_quad(), self.mask_quad(), self.flags, self.metric) def __repr__(self): return 'RouteEntry("{0}", "{1}", "{2}", "{3}", "{4:#04x}", "{5}")' \ .format(self.interface, self.destination, self.gateway, self.mask, self.flags, self.metric) class NetworkInterfaceCard: def __init__(self, name, link_info): self.name = name self.ipv4 = set() self.ipv6 = set() self.link = link_info def add_ipv4(self, info): self.ipv4.add(info) def add_ipv6(self, info): self.ipv6.add(info) def __eq__(self, other): return self.link == other.link and \ self.ipv4 == other.ipv4 and \ self.ipv6 == other.ipv6 @staticmethod def _json_array(items): return "[{0}]".format(",".join(['"{0}"'.format(x) for x in sorted(items)])) def __str__(self): entries = ['"name": "{0}"'.format(self.name), '"link": "{0}"'.format(self.link)] if len(self.ipv4) > 0: entries.append('"ipv4": 
{0}'.format(self._json_array(self.ipv4))) if len(self.ipv6) > 0: entries.append('"ipv6": {0}'.format(self._json_array(self.ipv6))) return "{{ {0} }}".format(", ".join(entries)) Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/restutil.py000066400000000000000000000550661510742556200255100ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import threading import time import socket import struct import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.exception import HttpError, ResourceGoneError, InvalidContainerError from azurelinuxagent.common.future import httpclient, urlparse, ustr from azurelinuxagent.common.version import PY_VERSION_MAJOR, AGENT_NAME, GOAL_STATE_AGENT_VERSION SECURE_WARNING_EMITTED = False DEFAULT_RETRIES = 6 DELAY_IN_SECONDS = 1 THROTTLE_RETRIES = 25 THROTTLE_DELAY_IN_SECONDS = 1 # Reducing next attempt calls when throttled since telemetrydata endpoint has a limit 15 calls per 15 secs, TELEMETRY_THROTTLE_DELAY_IN_SECONDS = 8 # Considering short delay for telemetry flush imp events TELEMETRY_FLUSH_THROTTLE_DELAY_IN_SECONDS = 2 RETRY_CODES = [ httpclient.RESET_CONTENT, httpclient.PARTIAL_CONTENT, httpclient.FORBIDDEN, httpclient.INTERNAL_SERVER_ERROR, httpclient.NOT_IMPLEMENTED, 
httpclient.BAD_GATEWAY, httpclient.SERVICE_UNAVAILABLE, httpclient.GATEWAY_TIMEOUT, httpclient.INSUFFICIENT_STORAGE, 429, # Request Rate Limit Exceeded ] # # Currently the HostGAPlugin has an issue its cache that may produce a BAD_REQUEST failure for valid URIs when using the extensionArtifact API. # Add this status to the retryable codes, but use it only when requesting downloads via the HostGAPlugin. The retry logic in the download code # would give enough time to the HGAP to refresh its cache. Once the fix to address that issue is deployed, consider removing the use of # HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES. # HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES = RETRY_CODES[:] # make a copy of RETRY_CODES HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES.append(httpclient.BAD_REQUEST) RESOURCE_GONE_CODES = [ httpclient.GONE ] OK_CODES = [ httpclient.OK, httpclient.CREATED, httpclient.ACCEPTED ] NOT_MODIFIED_CODES = [ httpclient.NOT_MODIFIED ] HOSTPLUGIN_UPSTREAM_FAILURE_CODES = [ 502 ] THROTTLE_CODES = [ httpclient.FORBIDDEN, httpclient.SERVICE_UNAVAILABLE, 429, # Request Rate Limit Exceeded ] RETRY_EXCEPTIONS = [ httpclient.NotConnected, httpclient.IncompleteRead, httpclient.ImproperConnectionState, httpclient.BadStatusLine ] # http://www.gnu.org/software/wget/manual/html_node/Proxies.html HTTP_PROXY_ENV = "http_proxy" HTTPS_PROXY_ENV = "https_proxy" NO_PROXY_ENV = "no_proxy" HTTP_USER_AGENT = "{0}/{1}".format(AGENT_NAME, GOAL_STATE_AGENT_VERSION) HTTP_USER_AGENT_HEALTH = "{0}+health".format(HTTP_USER_AGENT) INVALID_CONTAINER_CONFIGURATION = "InvalidContainerConfiguration" REQUEST_ROLE_CONFIG_FILE_NOT_FOUND = "RequestRoleConfigFileNotFound" KNOWN_WIRESERVER_IP = '168.63.129.16' HOST_PLUGIN_PORT = 32526 TELEMETRY_DATA = "telemetrydata" class IOErrorCounter(object): _lock = threading.RLock() _protocol_endpoint = KNOWN_WIRESERVER_IP _counts = {"hostplugin":0, "protocol":0, "other":0} @staticmethod def increment(host=None, port=None): with IOErrorCounter._lock: if host == 
IOErrorCounter._protocol_endpoint: if port == HOST_PLUGIN_PORT: IOErrorCounter._counts["hostplugin"] += 1 else: IOErrorCounter._counts["protocol"] += 1 else: IOErrorCounter._counts["other"] += 1 @staticmethod def get_and_reset(): with IOErrorCounter._lock: counts = IOErrorCounter._counts.copy() IOErrorCounter.reset() return counts @staticmethod def reset(): with IOErrorCounter._lock: IOErrorCounter._counts = {"hostplugin":0, "protocol":0, "other":0} @staticmethod def set_protocol_endpoint(endpoint=KNOWN_WIRESERVER_IP): IOErrorCounter._protocol_endpoint = endpoint def _compute_delay(retry_attempt=1, delay=DELAY_IN_SECONDS): fib = (1, 1) for _ in range(retry_attempt): fib = (fib[1], fib[0]+fib[1]) return delay*fib[1] def _is_retry_status(status, retry_codes=None): if retry_codes is None: retry_codes = RETRY_CODES return status in retry_codes def _is_retry_exception(e): return len([x for x in RETRY_EXCEPTIONS if isinstance(e, x)]) > 0 def _is_throttle_status(status): return status in THROTTLE_CODES def _is_telemetry_req(url): if TELEMETRY_DATA in url: return True return False def _parse_url(url): """ Parse URL to get the components of the URL broken down to host, port :rtype: string, int, bool, string """ o = urlparse(url) rel_uri = o.path if o.fragment: rel_uri = "{0}#{1}".format(rel_uri, o.fragment) if o.query: rel_uri = "{0}?{1}".format(rel_uri, o.query) secure = False if o.scheme.lower() == "https": secure = True return o.hostname, o.port, secure, rel_uri def _trim_url_parameters(url): """ Parse URL and return scheme://hostname:port/path """ o = urlparse(url) if o.hostname: if o.port: return "{0}://{1}:{2}{3}".format(o.scheme, o.hostname, o.port, o.path) else: return "{0}://{1}{2}".format(o.scheme, o.hostname, o.path) return url def is_valid_cidr(string_network): """ Very simple check of the cidr format in no_proxy variable. 
:rtype: bool """ if string_network.count('/') == 1: try: mask = int(string_network.split('/')[1]) except ValueError: return False if mask < 1 or mask > 32: return False try: socket.inet_aton(string_network.split('/')[0]) except socket.error: return False else: return False return True def dotted_netmask(mask): """Converts mask from /xx format to xxx.xxx.xxx.xxx Example: if mask is 24 function returns 255.255.255.0 :rtype: str """ bits = 0xffffffff ^ (1 << 32 - mask) - 1 return socket.inet_ntoa(struct.pack('>I', bits)) def address_in_network(ip, net): """This function allows you to check if an IP belongs to a network subnet Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24 returns False if ip = 192.168.1.1 and net = 192.168.100.0/24 :rtype: bool """ ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0] netaddr, bits = net.split('/') netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0] network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask return (ipaddr & netmask) == (network & netmask) def is_ipv4_address(string_ip): """ :rtype: bool """ try: socket.inet_aton(string_ip) except socket.error: return False return True def get_no_proxy(): no_proxy = os.environ.get(NO_PROXY_ENV) or os.environ.get(NO_PROXY_ENV.upper()) if no_proxy: no_proxy = [host for host in no_proxy.replace(' ', '').split(',') if host] # no_proxy in the proxies argument takes precedence return no_proxy def bypass_proxy(host): no_proxy = get_no_proxy() if no_proxy: if is_ipv4_address(host): for proxy_ip in no_proxy: if is_valid_cidr(proxy_ip): if address_in_network(host, proxy_ip): return True elif host == proxy_ip: # If no_proxy ip was defined in plain IP notation instead of cidr notation & # matches the IP of the index return True else: for proxy_domain in no_proxy: if host.lower().endswith(proxy_domain.lower()): # The URL does match something in no_proxy, so we don't want # to apply the proxies on this URL. 
return True return False def _get_http_proxy(secure=False): # Prefer the configuration settings over environment variables host = conf.get_httpproxy_host() port = None if not host is None: port = conf.get_httpproxy_port() else: http_proxy_env = HTTPS_PROXY_ENV if secure else HTTP_PROXY_ENV http_proxy_url = None for v in [http_proxy_env, http_proxy_env.upper()]: if v in os.environ: http_proxy_url = os.environ[v] break if not http_proxy_url is None: host, port, _, _ = _parse_url(http_proxy_url) return host, port def _http_request(method, host, rel_uri, timeout, port=None, data=None, secure=False, headers=None, proxy_host=None, proxy_port=None, redact_data=False): headers = {} if headers is None else headers headers['Connection'] = 'close' use_proxy = proxy_host is not None and proxy_port is not None if port is None: port = 443 if secure else 80 if 'User-Agent' not in headers: headers['User-Agent'] = HTTP_USER_AGENT if use_proxy: conn_host, conn_port = proxy_host, proxy_port scheme = "https" if secure else "http" url = "{0}://{1}:{2}{3}".format(scheme, host, port, rel_uri) else: conn_host, conn_port = host, port url = rel_uri if secure: conn = httpclient.HTTPSConnection(conn_host, conn_port, timeout=timeout) if use_proxy: conn.set_tunnel(host, port) else: conn = httpclient.HTTPConnection(conn_host, conn_port, timeout=timeout) payload = data if redact_data: payload = "[REDACTED]" # Logger requires the msg to be a ustr to log properly, ensuring that the data string that we log is always ustr logger.verbose("HTTP connection [{0}] [{1}] [{2}] [{3}]", method, url, textutil.str_to_encoded_ustr(payload), headers) conn.request(method=method, url=url, body=data, headers=headers) return conn.getresponse() def http_request(method, url, data, timeout, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, throttle_delay=THROTTLE_DELAY_IN_SECONDS, redact_data=False, return_raw_response=False): """ NOTE: This method provides some logic to 
handle errors in the HTTP request, including checking the HTTP status of the response and handling some exceptions. If return_raw_response is set to True all the error handling will be skipped and the method will return the actual HTTP response and bubble up any exceptions while issuing the request. Also note that if return_raw_response is True no retries will be done. """ if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES global SECURE_WARNING_EMITTED # pylint: disable=W0603 host, port, secure, rel_uri = _parse_url(url) # Use the HTTP(S) proxy proxy_host, proxy_port = (None, None) if use_proxy and not bypass_proxy(host): proxy_host, proxy_port = _get_http_proxy(secure=secure) if proxy_host or proxy_port: logger.verbose("HTTP proxy: [{0}:{1}]", proxy_host, proxy_port) # If httplib module is not built with ssl support, # fallback to HTTP if allowed if secure and not hasattr(httpclient, "HTTPSConnection"): if not conf.get_allow_http(): raise HttpError("HTTPS is unavailable and required") secure = False if not SECURE_WARNING_EMITTED: logger.warn("Python does not include SSL support") SECURE_WARNING_EMITTED = True # If httplib module doesn't support HTTPS tunnelling, # fallback to HTTP if allowed if secure and \ proxy_host is not None and \ proxy_port is not None \ and not hasattr(httpclient.HTTPSConnection, "set_tunnel"): if not conf.get_allow_http(): raise HttpError("HTTPS tunnelling is unavailable and required") secure = False if not SECURE_WARNING_EMITTED: logger.warn("Python does not support HTTPS tunnelling") SECURE_WARNING_EMITTED = True msg = '' attempt = 0 delay = 0 was_throttled = False while attempt < max_retry: if attempt > 0: # Compute the request delay # -- Use a fixed delay if the server ever rate-throttles the request # (with a safe, minimum number of retry attempts) # -- Otherwise, compute a delay that is the product of the next # item in the Fibonacci series and the initial delay value if was_throttled: 
                delay = throttle_delay
            else:
                delay = _compute_delay(retry_attempt=attempt, delay=retry_delay)

            logger.verbose("[HTTP Retry] Attempt {0} of {1} will delay {2} seconds: {3}",
                           attempt + 1, max_retry, delay, msg)

            time.sleep(delay)

        attempt += 1

        try:
            resp = _http_request(method,
                                 host,
                                 rel_uri,
                                 timeout,
                                 port=port,
                                 data=data,
                                 secure=secure,
                                 headers=headers,
                                 proxy_host=proxy_host,
                                 proxy_port=proxy_port,
                                 redact_data=redact_data)
            logger.verbose("[HTTP Response] Status Code {0}", resp.status)

            if return_raw_response:  # skip all error handling
                return resp

            if request_failed(resp):
                if _is_retry_status(resp.status, retry_codes=retry_codes):
                    msg = '[HTTP Retry] {0} {1} -- Status Code {2}'.format(method, url, resp.status)
                    # Note if throttled and ensure a safe, minimum number of
                    # retry attempts
                    if _is_throttle_status(resp.status):
                        was_throttled = True
                        # Today, THROTTLE_RETRIES is set to a large number (26) of retries, as opposed
                        # to backing off and attempting fewer retries. However, for telemetry calls
                        # (due to the throttle limit of 15 calls per 15 seconds), we use the max_retry
                        # set by the caller for the overall retry attempts instead of THROTTLE_RETRIES.
                        if not _is_telemetry_req(url):
                            max_retry = max(max_retry, THROTTLE_RETRIES)
                    continue

            # If we got a 410 (resource gone) for any reason, raise an exception. The caller will handle it by
            # forcing a goal state refresh and retrying the call.
            if resp.status in RESOURCE_GONE_CODES:
                response_error = read_response_error(resp)
                raise ResourceGoneError(response_error)

            # If we got a 400 (bad request) because the container id is invalid, it could indicate a stale goal
            # state. The caller will handle this exception by forcing a goal state refresh and retrying the call.
            if resp.status == httpclient.BAD_REQUEST:
                response_error = read_response_error(resp)
                if INVALID_CONTAINER_CONFIGURATION in response_error:
                    raise InvalidContainerError(response_error)

            return resp

        except httpclient.HTTPException as e:
            if return_raw_response:  # skip all error handling
                raise
            clean_url = _trim_url_parameters(url)
            msg = '[HTTP Failed] {0} {1} -- HttpException {2}'.format(method, clean_url, e)
            if _is_retry_exception(e):
                continue
            break

        except IOError as e:
            if return_raw_response:  # skip all error handling
                raise
            IOErrorCounter.increment(host=host, port=port)
            clean_url = _trim_url_parameters(url)
            msg = '[HTTP Failed] {0} {1} -- IOError {2}'.format(method, clean_url, e)
            continue

    raise HttpError("{0} -- {1} attempts made".format(msg, attempt))


def http_get(url,
             headers=None,
             use_proxy=False,
             max_retry=None,
             retry_codes=None,
             retry_delay=DELAY_IN_SECONDS,
             return_raw_response=False,
             timeout=10):
    """
    NOTE: This method provides some logic to handle errors in the HTTP request, including
    checking the HTTP status of the response and handling some exceptions.

    If return_raw_response is set to True, all the error handling will be skipped and the
    method will return the actual HTTP response and bubble up any exceptions while issuing
    the request. Also note that if return_raw_response is True no retries will be done.
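The retry loop above computes its backoff as "the product of the next item in the Fibonacci series and the initial delay value". A minimal, self-contained sketch of that idea; the name `compute_fibonacci_delay` and the seed values are assumptions for illustration, not the agent's actual `_compute_delay`, which is defined elsewhere in restutil.py:

```python
# Illustrative sketch of a Fibonacci-based retry delay; the real
# _compute_delay may use different seed values.
def compute_fibonacci_delay(retry_attempt, delay):
    # Walk the Fibonacci series once per prior attempt...
    fib = (1, 1)
    for _ in range(retry_attempt):
        fib = (fib[1], fib[0] + fib[1])
    # ...and scale the initial delay by the resulting term.
    return delay * fib[1]

# With a 1-second base delay, successive attempts wait 2, 3, 5, 8, ... seconds.
delays = [compute_fibonacci_delay(attempt, 1) for attempt in range(1, 5)]
```

The growth is gentler than exponential backoff while still spreading retries out quickly.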
""" if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("GET", url, None, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay, return_raw_response=return_raw_response) def http_head(url, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("HEAD", url, None, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay) def http_post(url, data, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, throttle_delay=THROTTLE_DELAY_IN_SECONDS, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("POST", url, data, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay, throttle_delay=throttle_delay) def http_put(url, data, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, redact_data=False, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("PUT", url, data, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay, redact_data=redact_data) def http_delete(url, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("DELETE", url, None, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay) def request_failed(resp, 
ok_codes=None): if ok_codes is None: ok_codes = OK_CODES return not request_succeeded(resp, ok_codes=ok_codes) def request_succeeded(resp, ok_codes=None): if ok_codes is None: ok_codes = OK_CODES return resp is not None and resp.status in ok_codes def request_not_modified(resp): return resp is not None and resp.status in NOT_MODIFIED_CODES def request_failed_at_hostplugin(resp, upstream_failure_codes=None): """ Host plugin will return 502 for any upstream issue, so a failure is any 5xx except 502 """ if upstream_failure_codes is None: upstream_failure_codes = HOSTPLUGIN_UPSTREAM_FAILURE_CODES return resp is not None and resp.status >= 500 and resp.status not in upstream_failure_codes def read_response_error(resp): result = '' if resp is not None: try: result = "[HTTP Failed] [{0}: {1}] {2}".format( resp.status, resp.reason, resp.read()) # this result string is passed upstream to several methods # which do a raise HttpError() or a format() of some kind; # as a result it cannot have any unicode characters if PY_VERSION_MAJOR < 3: result = ustr(result, encoding='ascii', errors='ignore') else: result = result\ .encode(encoding='ascii', errors='ignore')\ .decode(encoding='ascii', errors='ignore') result = textutil.replace_non_ascii(result) except Exception as e: logger.warn(textutil.format_exception(e)) return result Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/shellutil.py000066400000000000000000000401121510742556200256240ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import subprocess
import sys
import tempfile
import threading

if sys.version_info[0] == 2:
    # TimeoutExpired was introduced in Python 3; define a dummy class for Python 2
    class TimeoutExpired(Exception):
        pass
else:
    from subprocess import TimeoutExpired

import azurelinuxagent.common.logger as logger
from azurelinuxagent.common.future import ustr

if not hasattr(subprocess, 'check_output'):
    def check_output(*popenargs, **kwargs):
        r"""Backport from subprocess module from python 2.7"""
        if 'stdout' in kwargs:
            raise ValueError('stdout argument not allowed, '
                             'it will be overridden.')
        process = subprocess.Popen(stdout=subprocess.PIPE,
                                   *popenargs, **kwargs)
        output, unused_err = process.communicate()
        retcode = process.poll()
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
            raise subprocess.CalledProcessError(retcode, cmd, output=output)
        return output

    # Exception classes used by this module.
    class CalledProcessError(Exception):
        def __init__(self, returncode, cmd, output=None):  # pylint: disable=W0231
            self.returncode = returncode
            self.cmd = cmd
            self.output = output

        def __str__(self):
            return ("Command '{0}' returned non-zero exit status {1}"
                    "").format(self.cmd, self.returncode)

    subprocess.check_output = check_output
    subprocess.CalledProcessError = CalledProcessError


# pylint: disable=W0105
"""
Shell command util functions
"""
# pylint: enable=W0105


def has_command(cmd):
    """
    Return True if the given command is on the path
    """
    return not run(cmd, False)


def run(cmd, chk_err=True, expected_errors=None):
    """
    Note: Deprecated in favour of the `azurelinuxagent.common.utils.shellutil.run_command` function.

    Calls run_get_output on 'cmd', returning only the return code. If chk_err=True then
    errors will be reported in the log. If chk_err=False then errors will be suppressed
    from the log.
""" if expected_errors is None: expected_errors = [] retcode, out = run_get_output(cmd, chk_err=chk_err, expected_errors=expected_errors) # pylint: disable=W0612 return retcode def run_get_output(cmd, chk_err=True, log_cmd=True, expected_errors=None): """ Wrapper for subprocess.check_output. Execute 'cmd'. Returns return code and STDOUT, trapping expected exceptions. Reports exceptions to Error if chk_err parameter is True For new callers, consider using run_command instead as it separates stdout from stderr, returns only stdout on success, logs both outputs and return code on error and raises an exception. """ if expected_errors is None: expected_errors = [] if log_cmd: logger.verbose(u"Command: [{0}]", cmd) try: process = _popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True) output, _ = process.communicate() _on_command_completed(process.pid) output = __encode_command_output(output) if process.returncode != 0: if chk_err: msg = u"Command: [{0}], " \ u"return code: [{1}], " \ u"result: [{2}]".format(cmd, process.returncode, output) if process.returncode in expected_errors: logger.info(msg) else: logger.error(msg) return process.returncode, output except Exception as exception: if chk_err: logger.error(u"Command [{0}] raised unexpected exception: [{1}]" .format(cmd, ustr(exception))) return -1, ustr(exception) return 0, output def __format_command(command): """ Formats the command taken by run_command/run_pipe. 
    Examples:
        > __format_command("sort")
        'sort'
        > __format_command(["sort", "-u"])
        'sort -u'
        > __format_command([["sort"], ["uniq", "-n"]])
        'sort | uniq -n'
    """
    if isinstance(command, list):
        if command and isinstance(command[0], list):
            return " | ".join([" ".join(cmd) for cmd in command])
        return " ".join(command)
    return command


def __encode_command_output(output):
    """
    Encodes the stdout/stderr returned by subprocess.communicate()
    """
    return ustr(output if output is not None else b'', encoding='utf-8', errors="backslashreplace")


class CommandError(Exception):
    """
    Exception raised by run_command/run_pipe when the command returns an error
    """
    @staticmethod
    def _get_message(command, return_code, stderr):
        command_name = command[0] if isinstance(command, list) and len(command) > 0 else command
        return "'{0}' failed: {1} ({2})".format(command_name, return_code, stderr.rstrip())

    def __init__(self, command, return_code, stdout, stderr):
        super(Exception, self).__init__(CommandError._get_message(command, return_code, stderr))  # pylint: disable=E1003
        self.command = command
        self.returncode = return_code
        self.stdout = stdout
        self.stderr = stderr


def __run_command(command_action, command, log_error, encode_output):
    """
    Executes the given command_action and returns its stdout. The command_action is a
    function that executes a command/pipe and returns its exit code, stdout, and stderr.

    If there are any errors executing the command it raises a CommandError; if 'log_error'
    is True, it also logs details about the error.

    If encode_output is True the stdout is returned as a string, otherwise it is returned
    as a bytes object.
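The docstring examples above can be checked against a standalone version of the formatter (renamed `format_command` here because double-underscore names are module-private; this is a sketch for illustration, not the module itself):

```python
def format_command(command):
    # A pipe is a list of lists; join the inner commands with " | "
    if isinstance(command, list):
        if command and isinstance(command[0], list):
            return " | ".join(" ".join(cmd) for cmd in command)
        # A plain command is a list of strings
        return " ".join(command)
    # Anything else (e.g. a plain string) is returned unchanged
    return command

print(format_command([["sort"], ["uniq", "-n"]]))  # sort | uniq -n
```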
""" try: return_code, stdout, stderr = command_action() if encode_output: stdout = __encode_command_output(stdout) stderr = __encode_command_output(stderr) if return_code != 0: if log_error: logger.error( "Command: [{0}], return code: [{1}], stdout: [{2}] stderr: [{3}]", __format_command(command), return_code, stdout, stderr) raise CommandError(command=__format_command(command), return_code=return_code, stdout=stdout, stderr=stderr) return stdout except CommandError: raise except Exception as exception: if log_error: logger.error(u"Command [{0}] raised unexpected exception: [{1}]", __format_command(command), ustr(exception)) raise # W0622: Redefining built-in 'input' -- disabled: the parameter name mimics subprocess.communicate() def run_command(command, input=None, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, log_error=False, encode_input=True, encode_output=True, track_process=True, timeout=None): # pylint:disable=W0622 """ Executes the given command and returns its stdout. If there are any errors executing the command it raises a RunCommandException; if 'log_error' is True, it also logs details about the error. If encode_output is True the stdout is returned as a string, otherwise it is returned as a bytes object. If track_process is False the command is not added to list of running commands This function is a thin wrapper around Popen/communicate in the subprocess module: * The 'input' parameter corresponds to the same parameter in communicate * The 'stdin' parameter corresponds to the same parameters in Popen * Only one of 'input' and 'stdin' can be specified * The 'stdout' and 'stderr' parameters correspond to the same parameters in Popen, except that they default to subprocess.PIPE instead of None * If the output of the command is redirected using the 'stdout' or 'stderr' parameters (i.e. 
         if the value for these parameters is anything other than the default
         (subprocess.PIPE)), then the corresponding values returned by this function or
         the CommandError exception will be empty strings.

    NOTE: The 'timeout' parameter is ignored on Python 2

    NOTE: This is the preferred method to execute shell commands over the
    `azurelinuxagent.common.utils.shellutil.run` function.
    """
    if input is not None and stdin is not None:
        raise ValueError("The input and stdin arguments are mutually exclusive")

    def command_action():
        popen_stdin = communicate_input = None
        if input is not None:
            popen_stdin = subprocess.PIPE
            # communicate() needs an array of bytes
            communicate_input = input.encode() if encode_input and isinstance(input, str) else input
        if stdin is not None:
            popen_stdin = stdin
            communicate_input = None

        if track_process:
            process = _popen(command, stdin=popen_stdin, stdout=stdout, stderr=stderr, shell=False)
        else:
            process = subprocess.Popen(command, stdin=popen_stdin, stdout=stdout, stderr=stderr, shell=False)

        try:
            if sys.version_info[0] == 2:  # communicate() doesn't support timeout on Python 2
                command_stdout, command_stderr = process.communicate(input=communicate_input)
            else:
                command_stdout, command_stderr = process.communicate(input=communicate_input, timeout=timeout)
        except TimeoutExpired:
            if log_error:
                logger.error(u"Command [{0}] timed out", __format_command(command))

            command_stdout, command_stderr = '', ''

            try:
                process.kill()
                # try to get any output from the command, but ignore any errors if we can't
                try:
                    command_stdout, command_stderr = process.communicate()
                # W0702: No exception type(s) specified (bare-except)
                except:  # pylint: disable=W0702
                    pass
            except Exception as exception:
                if log_error:
                    logger.error(u"Can't terminate timed out process: {0}", ustr(exception))
            raise CommandError(command=__format_command(command), return_code=-1, stdout=command_stdout, stderr="command timeout\n{0}".format(command_stderr))

        if track_process:
            _on_command_completed(process.pid)

        return process.returncode, command_stdout, command_stderr

    return __run_command(command_action=command_action, command=command, log_error=log_error, encode_output=encode_output)


def run_pipe(pipe, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, log_error=False, encode_output=True):
    """
    Executes the given commands as a pipe and returns its stdout as a string.

    The pipe is a list of commands, which in turn are a list of strings, e.g.
    [["sort"], ["uniq", "-n"]] represents 'sort | uniq -n'

    If there are any errors executing the command it raises a CommandError; if 'log_error'
    is True, it also logs details about the error.

    If encode_output is True the stdout is returned as a string, otherwise it is returned
    as a bytes object.

    This function is a thin wrapper around Popen/communicate in the subprocess module:

       * The 'stdin' parameter is used as input for the first command in the pipe
       * The 'stdout' and 'stderr' parameters can be used to redirect the output of the pipe
       * If the output of the pipe is redirected using the 'stdout' or 'stderr' parameters
         (i.e. if the value for these parameters is anything other than the default
         (subprocess.PIPE)), then the corresponding values returned by this function or
         the CommandError exception will be empty strings.
    """
    if len(pipe) < 2:
        raise ValueError("The pipe must consist of at least 2 commands")

    def command_action():
        stderr_file = None

        try:
            popen_stdin = stdin
            # If stderr is subprocess.PIPE each call to Popen would create a new pipe. We want to collect
            # the stderr of all the commands in the pipe so we replace stderr with a temporary file that
            # we read once the pipe completes.
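run_pipe wires the stdout of each process into the stdin of the next, following the "replacing shell pipeline" recipe in the subprocess docs. A self-contained illustration of that wiring (using the Python interpreter itself for both stages so the example does not depend on external tools; this is a sketch of the pattern, not the agent's code):

```python
import subprocess
import sys

# Stage 1: emit unsorted lines
p1 = subprocess.Popen([sys.executable, "-c", "print('b'); print('a')"],
                      stdout=subprocess.PIPE)
# Stage 2: sort whatever arrives on stdin
p2 = subprocess.Popen([sys.executable, "-c",
                       "import sys; sys.stdout.write(''.join(sorted(sys.stdin)))"],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
# Close our copy of p1's stdout so p1 receives SIGPIPE if p2 exits early
p1.stdout.close()
out, _ = p2.communicate()
# on POSIX, out == b'a\nb\n'
```

Closing the parent's handle on the intermediate pipe is the step the `processes[i].stdout.close()` loop below performs for every stage but the last.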
            if stderr == subprocess.PIPE:
                stderr_file = tempfile.TemporaryFile()
                popen_stderr = stderr_file
            else:
                popen_stderr = stderr

            processes = []

            i = 0
            while i < len(pipe) - 1:
                processes.append(_popen(pipe[i], stdin=popen_stdin, stdout=subprocess.PIPE, stderr=popen_stderr))
                popen_stdin = processes[i].stdout
                i += 1

            processes.append(_popen(pipe[i], stdin=popen_stdin, stdout=stdout, stderr=popen_stderr))

            i = 0
            while i < len(processes) - 1:
                processes[i].stdout.close()  # see https://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline
                i += 1

            pipe_stdout, pipe_stderr = processes[i].communicate()

            for proc in processes:
                _on_command_completed(proc.pid)

            if stderr_file is not None:
                stderr_file.seek(0)
                pipe_stderr = stderr_file.read()

            return processes[i].returncode, pipe_stdout, pipe_stderr
        finally:
            if stderr_file is not None:
                stderr_file.close()

    return __run_command(command_action=command_action, command=pipe, log_error=log_error, encode_output=encode_output)


def quote(word_list):
    """
    Quote a list or tuple of strings for the Unix shell as words, using the byte-literal
    single quote. The resulting string is safe for use with ``shell=True`` in
    ``subprocess``, and in ``os.system``.

    ``assert shlex.split(quote(word_list)) == word_list``.

    See POSIX.1:2013 Vol 3, Chap 2, Sec 2.2.2:
    http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02_02
    """
    if not isinstance(word_list, (tuple, list)):
        word_list = (word_list,)

    return " ".join(list("'{0}'".format(s.replace("'", "'\\''")) for s in word_list))


#
# The run_command/run_pipe/run/run_get_output functions maintain a list of the commands
# that they are currently executing.
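The round-trip contract stated in quote()'s docstring can be demonstrated with a standalone copy of the quoting logic (named `shell_quote` here to keep it separate from the module's function; a sketch for illustration):

```python
import shlex

def shell_quote(word_list):
    # Accept a single string as a one-word list, as the original does
    if not isinstance(word_list, (tuple, list)):
        word_list = (word_list,)
    # Wrap each word in single quotes; embedded single quotes become '\''
    return " ".join("'{0}'".format(s.replace("'", "'\\''")) for s in word_list)

words = ["echo", "it's done", "a b"]
quoted = shell_quote(words)
# shlex.split() undoes the quoting exactly
assert shlex.split(quoted) == words
```

The `'\''` idiom ends the current single-quoted string, emits a literal quote, and reopens quoting, which is the standard POSIX way to embed a single quote.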
#
#
_running_commands = []
_running_commands_lock = threading.RLock()

PARENT_PROCESS_NAME = "AZURE_GUEST_AGENT_PARENT_PROCESS_NAME"
AZURE_GUEST_AGENT = "AZURE_GUEST_AGENT"


def _popen(*args, **kwargs):
    with _running_commands_lock:
        # Add the environment variables
        env = {}
        if 'env' in kwargs:
            env.update(kwargs['env'])
        else:
            env.update(os.environ)

        # Set the marker before process start
        env[PARENT_PROCESS_NAME] = AZURE_GUEST_AGENT
        kwargs['env'] = env

        process = subprocess.Popen(*args, **kwargs)
        _running_commands.append(process.pid)
        return process


def _on_command_completed(pid):
    with _running_commands_lock:
        _running_commands.remove(pid)


def get_running_commands():
    """
    Returns the commands started by run/run_get_output/run_command/run_pipe that are
    currently running.

    NOTE: This function is not synchronized with process completion, so the returned
    array may include processes that have already completed. Also, keep in mind that by
    the time this function returns additional processes may have started or completed.
    """
    with _running_commands_lock:
        return _running_commands[:]  # return a copy, since the call may originate on another thread


# ===========================================================================
# Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/textutil.py
# ===========================================================================
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import base64
import re
import struct
import sys
import traceback
import xml.dom.minidom as minidom
import zlib

from azurelinuxagent.common.future import ustr


def parse_doc(xml_text):
    """
    Parse an xml document from a string
    """
    # The minidom lib has some issues with unicode in python2.
    # Encode the string into utf-8 first
    xml_text = xml_text.encode('utf-8')
    return minidom.parseString(xml_text)


def findall(root, tag, namespace=None):
    """
    Get all nodes by tag and namespace under Node root.
    """
    if root is None:
        return []

    if namespace is None:
        return root.getElementsByTagName(tag)
    else:
        return root.getElementsByTagNameNS(namespace, tag)


def find(root, tag, namespace=None):
    """
    Get the first node by tag and namespace under Node root.
    """
    nodes = findall(root, tag, namespace=namespace)
    if nodes is not None and len(nodes) >= 1:
        return nodes[0]
    else:
        return None


def gettext(node):
    """
    Get node text
    """
    if node is None:
        return None

    for child in node.childNodes:
        if child.nodeType == child.TEXT_NODE:
            return child.data
    return None


def gettextxml(node):
    """
    Get the raw XML of a text node
    """
    if node is None:
        return None

    for child in node.childNodes:
        if child.nodeType == child.TEXT_NODE:
            return child.toxml()
    return None


def findtext(root, tag, namespace=None):
    """
    Get the text of a node by tag and namespace under Node root.
    """
    node = find(root, tag, namespace=namespace)
    return gettext(node)


def getattrib(node, attr_name):
    """
    Get an attribute of an xml node.
    Returns None if node is None.
    Returns "" if node does not have the attribute attr_name.
    """
    if node is not None:
        return node.getAttribute(attr_name)
    else:
        return None


def hasattrib(node, attr_name):
    """
    Return True if the xml node has the attribute, False if node is None
    or node does not have the attribute attr_name.
    """
    if node is not None:
        return node.hasAttribute(attr_name)
    else:
        return False


def unpack(buf, offset, value_range):
    """
    Unpack bytes into python values.
""" result = 0 for i in value_range: result = (result << 8) | str_to_ord(buf[offset + i]) return result def unpack_little_endian(buf, offset, length): """ Unpack little endian bytes into python values. """ return unpack(buf, offset, list(range(length - 1, -1, -1))) def unpack_big_endian(buf, offset, length): """ Unpack big endian bytes into python values. """ return unpack(buf, offset, list(range(0, length))) def hex_dump3(buf, offset, length): """ Dump range of buf in formatted hex. """ return ''.join(['%02X' % str_to_ord(char) for char in buf[offset:offset + length]]) def hex_dump2(buf): """ Dump buf in formatted hex. """ return hex_dump3(buf, 0, len(buf)) def is_in_range(a, low, high): """ Return True if 'a' in 'low' <= a <= 'high' """ return low <= a <= high def is_printable(ch): """ Return True if character is displayable. """ return (is_in_range(ch, str_to_ord('A'), str_to_ord('Z')) or is_in_range(ch, str_to_ord('a'), str_to_ord('z')) or is_in_range(ch, str_to_ord('0'), str_to_ord('9'))) def hex_dump(buffer, size): # pylint: disable=redefined-builtin """ Return Hex formated dump of a 'buffer' of 'size'. """ if size < 0: size = len(buffer) result = "" for i in range(0, size): if (i % 16) == 0: result += "%06X: " % i byte = buffer[i] if type(byte) == str: byte = ord(byte.decode('latin1')) result += "%02X " % byte if (i & 15) == 7: result += " " if ((i + 1) % 16) == 0 or (i + 1) == size: j = i while ((j + 1) % 16) != 0: result += " " if (j & 7) == 7: result += " " j += 1 result += " " for j in range(i - (i % 16), i + 1): byte = buffer[j] if type(byte) == str: byte = str_to_ord(byte.decode('latin1')) k = '.' if is_printable(byte): k = chr(byte) result += k if (i + 1) != size: result += "\n" return result def str_to_ord(a): """ Allows indexing into a string or an array of integers transparently. Generic utility function. 
""" if type(a) == type(b'') or type(a) == type(u''): a = ord(a) return a def compare_bytes(a, b, start, length): for offset in range(start, start + length): if str_to_ord(a[offset]) != str_to_ord(b[offset]): return False return True def int_to_ip4_addr(a): """ Build DHCP request string. """ return "%u.%u.%u.%u" % ((a >> 24) & 0xFF, (a >> 16) & 0xFF, (a >> 8) & 0xFF, (a) & 0xFF) def hexstr_to_bytearray(a): """ Return hex string packed into a binary struct. """ b = b"" for c in range(0, len(a) // 2): b += struct.pack("B", int(a[c * 2:c * 2 + 2], 16)) return b def set_ssh_config(config, name, val): found = False no_match = -1 match_start = no_match for i in range(0, len(config)): if config[i].startswith(name) and match_start == no_match: config[i] = "{0} {1}".format(name, val) found = True elif config[i].lower().startswith("match"): if config[i].lower().startswith("match all"): # outside match block match_start = no_match elif match_start == no_match: # inside match block match_start = i if not found: if match_start != no_match: i = match_start config.insert(i, "{0} {1}".format(name, val)) return config def set_ini_config(config, name, val): notfound = True nameEqual = name + '=' length = len(config) text = "{0}=\"{1}\"".format(name, val) for i in reversed(range(0, length)): if config[i].startswith(nameEqual): config[i] = text notfound = False break if notfound: config.insert(length - 1, text) def replace_non_ascii(incoming, replace_char=''): outgoing = '' if incoming is not None: for c in incoming: if str_to_ord(c) > 128: outgoing += replace_char else: outgoing += c return outgoing def remove_bom(c): """ bom is comprised of a sequence of three chars,0xef, 0xbb, 0xbf, in case of utf-8. 
""" if not is_str_none_or_whitespace(c) and \ len(c) > 2 and \ str_to_ord(c[0]) > 128 and \ str_to_ord(c[1]) > 128 and \ str_to_ord(c[2]) > 128: c = c[3:] return c def get_bytes_from_pem(pem_str): base64_bytes = "" for line in pem_str.split('\n'): if "----" not in line: base64_bytes += line return base64_bytes def compress(s): """ Compress a string, and return the base64 encoded result of the compression. This method returns a string instead of a byte array. It is expected that this method is called to compress smallish strings, not to compress the contents of a file. The output of this method is suitable for embedding in log statements. """ if sys.version_info[0] > 2: return base64.b64encode(zlib.compress(bytes(s, 'utf-8'))).decode('utf-8') return base64.b64encode(zlib.compress(s)) def b64encode(s): if sys.version_info[0] > 2: return base64.b64encode(bytes(s, 'utf-8')).decode('utf-8') return base64.b64encode(s) def b64decode(s): if sys.version_info[0] > 2: return base64.b64decode(s).decode('utf-8') return base64.b64decode(s) def safe_shlex_split(s): import shlex if sys.version_info[:2] == (2, 6): return shlex.split(s.encode('utf-8')) return shlex.split(s) def swap_hexstring(s, width=2): r = len(s) % width if r != 0: s = ('0' * (width - (len(s) % width))) + s return ''.join(reversed( re.findall( r'[a-f0-9]{{{0}}}'.format(width), s, re.IGNORECASE))) def parse_json(json_str): """ Parse json string and return a resulting dictionary """ # trim null and whitespaces result = None if not is_str_empty(json_str): import json result = json.loads(json_str.rstrip(' \t\r\n\0')) return result def is_str_none_or_whitespace(s): return s is None or len(s) == 0 or s.isspace() def is_str_empty(s): return is_str_none_or_whitespace(s) or is_str_none_or_whitespace(s.rstrip(' \t\r\n\0')) def format_memory_value(unit, value): units = {'bytes': 1, 'kilobytes': 1024, 'megabytes': 1024*1024, 'gigabytes': 1024*1024*1024} if unit not in units: raise ValueError("Unit must be one of 
{0}".format(units.keys())) try: value = float(value) except TypeError: raise TypeError('Value must be convertible to a float') return int(value * units[unit]) def str_to_encoded_ustr(s, encoding='utf-8'): """ This function takes the string and converts it into the corresponding encoded ustr if its not already a ustr. The encoding is utf-8 by default if not specified. Note: ustr() is a unicode object for Py2 and a str object for Py3. :param s: The string to convert to ustr :param encoding: Encoding to use. Utf-8 by default :return: Returns the corresponding ustr string. Returns None if input is None. """ if s is None or type(s) is ustr: # If its already a ustr/None then return as is return s if sys.version_info[0] > 2: try: # For py3+, str() is unicode by default if isinstance(s, bytes): # str.encode() returns bytes which should be decoded to get the str. return s.decode(encoding) else: # If its not encoded, just return the string return ustr(s) except Exception: # If some issues in decoding, just return the string return ustr(s) # For Py2, explicitly convert the string to unicode with the specified encoding return ustr(s, encoding=encoding) def format_exception(exception): # Function to format exception message e = None if sys.version_info[0] == 2: _, e, tb = sys.exc_info() else: tb = exception.__traceback__ msg = ustr(exception) + "\n" if tb is None or (sys.version_info[0] == 2 and e != exception): msg += "[Traceback not available]" else: msg += ''.join(traceback.format_exception(type(exception), value=exception, tb=tb)) return msg SAS_TOKEN_RE = re.compile(r'(https://\S+\?)((sv|st|se|sr|sp|sip|spr|sig)=\S+)+', flags=re.IGNORECASE) def redact_sas_token(msg): """ Redact SAS tokens from the message """ if msg is None: return msg return SAS_TOKEN_RE.sub(r'\1', msg) Azure-WALinuxAgent-a976115/azurelinuxagent/common/utils/timeutil.py000066400000000000000000000017511510742556200254610ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. 
# All rights reserved.
# Licensed under the Apache License.

import datetime


def create_utc_timestamp(dt):
    """
    Formats the given datetime, which must be timezone-aware and in UTC, as
    "YYYY-MM-DDTHH:MM:SS.ffffffZ". This is basically ISO-8601, but using "Z"
    (Zero offset) to represent the timezone offset (instead of "+00:00").

    The corresponding format for strftime/strptime is "%Y-%m-%dT%H:%M:%S.%fZ".
    """
    if dt.tzinfo is None:
        raise ValueError("The datetime must be timezone-aware")
    if dt.utcoffset() != datetime.timedelta(0):
        raise ValueError("The datetime must be in UTC")
    # We use isoformat() instead of strftime() since the latter is limited to years >= 1900 in Python < 3.2.
    # We remove the timezone information since we are using "Z" to represent UTC, and we force the
    # microseconds to be 000000 when they are 0.
    return dt.replace(tzinfo=None).isoformat() + (".000000Z" if dt.microsecond == 0 else "Z")


# ===========================================================================
# Azure-WALinuxAgent-a976115/azurelinuxagent/common/version.py
# ===========================================================================
# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
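create_utc_timestamp's "Z"-suffixed format can be exercised with a standalone condensation of the function (named `utc_timestamp` here for illustration; `datetime.timezone.utc` requires Python 3):

```python
import datetime

def utc_timestamp(dt):
    # Reject naive or non-UTC datetimes, as the original does
    if dt.tzinfo is None:
        raise ValueError("The datetime must be timezone-aware")
    if dt.utcoffset() != datetime.timedelta(0):
        raise ValueError("The datetime must be in UTC")
    # isoformat() omits microseconds when they are 0, so pad explicitly
    return dt.replace(tzinfo=None).isoformat() + (".000000Z" if dt.microsecond == 0 else "Z")

ts = utc_timestamp(datetime.datetime(2024, 1, 2, 3, 4, 5, tzinfo=datetime.timezone.utc))
# ts == '2024-01-02T03:04:05.000000Z'
```

The padding branch matters because a round-trip through strptime with "%Y-%m-%dT%H:%M:%S.%fZ" expects the fractional part to always be present.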
# # Requires Python 2.6+ and Openssl 1.0+ # import os import re import platform import sys import azurelinuxagent.common.conf as conf import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.future import ustr, get_linux_distribution __DAEMON_VERSION_ENV_VARIABLE = '_AZURE_GUEST_AGENT_DAEMON_VERSION_' """ The daemon process sets this variable's value to the daemon's version number. The variable is set only on versions >= 2.2.53 """ def set_daemon_version(version): """ Sets the value of the _AZURE_GUEST_AGENT_DAEMON_VERSION_ environment variable. The given 'version' can be a FlexibleVersion or a string that can be parsed into a FlexibleVersion """ flexible_version = version if isinstance(version, FlexibleVersion) else FlexibleVersion(version) os.environ[__DAEMON_VERSION_ENV_VARIABLE] = ustr(flexible_version) def get_daemon_version(): """ Retrieves the value of the _AZURE_GUEST_AGENT_DAEMON_VERSION_ environment variable. The value indicates the version of the daemon that started the current agent process or, if the current process is the daemon, the version of the current process. If the variable is not set (because the agent is < 2.2.53, or the process was not started by the daemon and the process is not the daemon itself) the function returns "0.0.0.0" """ if __DAEMON_VERSION_ENV_VARIABLE in os.environ: return FlexibleVersion(os.environ[__DAEMON_VERSION_ENV_VARIABLE]) return FlexibleVersion("0.0.0.0") def get_f5_platform(): """ Add this workaround for detecting F5 products because BIG-IP/IQ/etc do not show their version info in the /etc/product-version location. Instead, the version and product information is contained in the /VERSION file. 
""" result = [None, None, None, None] f5_version = re.compile(r"^Version: (\d+\.\d+\.\d+)") f5_product = re.compile(r"^Product: ([\w-]+)") with open('/VERSION', 'r') as fh: content = fh.readlines() for line in content: version_matches = f5_version.match(line) product_matches = f5_product.match(line) if version_matches: result[1] = version_matches.group(1) elif product_matches: result[3] = product_matches.group(1) if result[3] == "BIG-IP": result[0] = "bigip" result[2] = "bigip" elif result[3] == "BIG-IQ": result[0] = "bigiq" result[2] = "bigiq" elif result[3] == "iWorkflow": result[0] = "iworkflow" result[2] = "iworkflow" return result def get_checkpoint_platform(): take = build = release = "" full_name = open("/etc/cp-release").read().strip() with open("/etc/cloud-version") as f: for line in f: k, _, v = line.partition(": ") v = v.strip() if k == "release": release = v elif k == "take": take = v elif k == "build": build = v return ["gaia", take + "." + build, release, full_name] def get_distro(): if 'FreeBSD' in platform.system(): release = re.sub(r'\-.*\Z', '', ustr(platform.release())) osinfo = ['freebsd', release, '', 'freebsd'] elif 'OpenBSD' in platform.system(): release = re.sub(r'\-.*\Z', '', ustr(platform.release())) osinfo = ['openbsd', release, '', 'openbsd'] elif 'Linux' in platform.system(): osinfo = get_linux_distribution(0, 'alpine') elif 'NS-BSD' in platform.system(): release = re.sub(r'\-.*\Z', '', ustr(platform.release())) osinfo = ['nsbsd', release, '', 'nsbsd'] else: try: # dist() removed in Python 3.8 osinfo = list(platform.dist()) + [''] # pylint: disable=W1505,E1101 except Exception: osinfo = ['UNKNOWN', 'FFFF', '', ''] # The platform.py lib has issue with detecting oracle linux distribution. # Merge the following patch provided by oracle as a temporary fix. 
if os.path.exists("/etc/oracle-release"): osinfo[2] = "oracle" osinfo[3] = "Oracle Linux" if os.path.exists("/etc/euleros-release"): osinfo[0] = "euleros" if os.path.exists("/etc/UnionTech-release"): osinfo[0] = "uos" if os.path.exists("/etc/mariner-release"): osinfo[0] = "mariner" # The platform.py lib has issue with detecting BIG-IP linux distribution. # Merge the following patch provided by F5. if os.path.exists("/shared/vadc"): osinfo = get_f5_platform() if os.path.exists("/etc/cp-release"): osinfo = get_checkpoint_platform() if os.path.exists("/home/guestshell/azure"): osinfo = ['iosxe', 'csr1000v', '', 'Cisco IOSXE Linux'] if os.path.exists("/etc/photon-release"): osinfo[0] = "photonos" if os.path.exists('/etc/alpaquita-release'): osinfo[0] = 'alpaquita' # Remove trailing whitespace and quote in distro name osinfo[0] = osinfo[0].strip('"').strip(' ').lower() return osinfo COMMAND_ABSENT = ustr("Absent") COMMAND_FAILED = ustr("Failed") def get_lis_version(): """ This uses the Linux kernel's 'modinfo' command to retrieve the "version" field for the "hv_vmbus" kernel module (the LIS drivers). This is the documented method to retrieve the LIS module version. Every Linux guest on Hyper-V will have this driver, but it may not be installed as a module (it could instead be built into the kernel). In that case, this will return "Absent" instead of the version, indicating the driver version can be deduced from the kernel version. It will only return "Failed" in the presence of an exception. This function is used to generate telemetry for the version of the LIS drivers installed on the VM. The function and associated telemetry can be removed after a few releases. """ try: modinfo_output = shellutil.run_command(["modinfo", "-F", "version", "hv_vmbus"]) if modinfo_output: return modinfo_output # If the system doesn't have LIS drivers, 'modinfo' will # return nothing on stdout, which will cause 'run_command' # to return an empty string. 
return COMMAND_ABSENT except Exception: # Ignore almost every possible exception because this is in a # critical code path. Unfortunately the logger isn't already # imported in this module or we'd log this too. return COMMAND_FAILED def has_logrotate(): try: logrotate_version = shellutil.run_command(["logrotate", "--version"]).split("\n")[0] return logrotate_version except shellutil.CommandError: # A non-zero return code means that logrotate isn't present on # the system; --version shouldn't fail otherwise. return COMMAND_ABSENT except Exception: return COMMAND_FAILED AGENT_NAME = "WALinuxAgent" AGENT_LONG_NAME = "Azure Linux Agent" # # IMPORTANT: Please be sure that the version is always 9.9.9.9 on the develop branch. Automation requires this, otherwise # DCR may test the wrong agent version. # # When doing a release, be sure to use the actual agent version. Current agent version: 2.4.0.0 # AGENT_VERSION = '2.15.0.1' AGENT_LONG_VERSION = "{0}-{1}".format(AGENT_NAME, AGENT_VERSION) AGENT_DESCRIPTION = """ The Azure Linux Agent supports the provisioning and running of Linux VMs in the Azure cloud. This package should be installed on Linux disk images that are built to run in the Azure environment. """ AGENT_DIR_GLOB = "{0}-*".format(AGENT_NAME) AGENT_PKG_GLOB = "{0}-*.zip".format(AGENT_NAME) AGENT_PATTERN = "{0}-(.*)".format(AGENT_NAME) AGENT_NAME_PATTERN = re.compile(AGENT_PATTERN) AGENT_PKG_PATTERN = re.compile(AGENT_PATTERN+r"\.zip") AGENT_DIR_PATTERN = re.compile(".*/{0}".format(AGENT_PATTERN)) # The execution mode of the VM - IAAS or PAAS. Linux VMs are only executed in IAAS mode. 
AGENT_EXECUTION_MODE = "IAAS" EXT_HANDLER_PATTERN = br".*/WALinuxAgent-(\d+.\d+.\d+[.\d+]*).*-run-exthandlers" EXT_HANDLER_REGEX = re.compile(EXT_HANDLER_PATTERN) __distro__ = get_distro() DISTRO_NAME = __distro__[0] DISTRO_VERSION = __distro__[1] DISTRO_CODE_NAME = __distro__[2] DISTRO_FULL_NAME = __distro__[3] PY_VERSION = sys.version_info PY_VERSION_MAJOR = sys.version_info[0] PY_VERSION_MINOR = sys.version_info[1] PY_VERSION_MICRO = sys.version_info[2] # Set the CURRENT_AGENT and CURRENT_VERSION to match the agent directory name # - This ensures the agent will "see itself" using the same name and version # as the code that downloads agents. def set_current_agent(): path = os.getcwd() lib_dir = conf.get_lib_dir() if lib_dir[-1] != os.path.sep: lib_dir += os.path.sep agent = path[len(lib_dir):].split(os.path.sep)[0] match = AGENT_NAME_PATTERN.match(agent) if match: version = match.group(1) else: agent = AGENT_LONG_VERSION version = AGENT_VERSION return agent, FlexibleVersion(version) def is_agent_package(path): path = os.path.basename(path) return not re.match(AGENT_PKG_PATTERN, path) is None def is_agent_path(path): path = os.path.basename(path) return not re.match(AGENT_NAME_PATTERN, path) is None CURRENT_AGENT, CURRENT_VERSION = set_current_agent() def set_goal_state_agent(): agent = None if os.path.isdir("/proc"): pids = [pid for pid in os.listdir('/proc') if pid.isdigit()] else: pids = [] for pid in pids: try: pname = open(os.path.join('/proc', pid, 'cmdline'), 'rb').read() match = EXT_HANDLER_REGEX.match(pname) if match: agent = match.group(1) if PY_VERSION_MAJOR > 2: agent = agent.decode('UTF-8') break except IOError: continue if agent is None: agent = CURRENT_VERSION return agent GOAL_STATE_AGENT_VERSION = set_goal_state_agent() 
Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/000077500000000000000000000000001510742556200220625ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/__init__.py000066400000000000000000000012611510742556200241730ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.daemon.main import get_daemon_handler Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/main.py000066400000000000000000000163051510742556200233650ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import sys import time import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.event import add_event, WALAEventOperation, initialize_event_logger_vminfo_common_parameters_and_protocol from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.goal_state import GoalState, GoalStateProperties from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.pa.rdma.rdma import setup_rdma_device from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.version import AGENT_NAME, AGENT_LONG_NAME, \ AGENT_VERSION, \ DISTRO_NAME, DISTRO_VERSION, PY_VERSION_MAJOR, PY_VERSION_MINOR, \ PY_VERSION_MICRO from azurelinuxagent.daemon.resourcedisk import get_resourcedisk_handler from azurelinuxagent.daemon.scvmm import get_scvmm_handler from azurelinuxagent.ga.update import get_update_handler from azurelinuxagent.pa.provision import get_provision_handler from azurelinuxagent.pa.rdma import get_rdma_handler OPENSSL_FIPS_ENVIRONMENT = "OPENSSL_FIPS" def get_daemon_handler(): return DaemonHandler() class DaemonHandler(object): """ Main thread of daemon. It will invoke other threads to do actual work """ def __init__(self): self.running = True self.osutil = get_osutil() def run(self, child_args=None): # # The Container ID in telemetry events is retrieved from the goal state. We can fetch the goal state # only after protocol detection, which is done during provisioning. # # Be aware that telemetry events emitted before that will not include the Container ID. 
# logger.info("{0} Version: {1}", AGENT_LONG_NAME, AGENT_VERSION) logger.info("OS: {0} {1}", DISTRO_NAME, DISTRO_VERSION) logger.info("Python: {0}.{1}.{2}", PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO) self.check_pid() self.initialize_environment() # If FIPS is enabled, set the OpenSSL environment variable # Note: # -- Subprocesses inherit the current environment if conf.get_fips_enabled(): os.environ[OPENSSL_FIPS_ENVIRONMENT] = '1' while self.running: try: self.daemon(child_args) except Exception as e: # pylint: disable=W0612 err_msg = textutil.format_exception(e) add_event(name=AGENT_NAME, is_success=False, message=ustr(err_msg), op=WALAEventOperation.UnhandledError) logger.warn("Daemon ended with exception -- Sleep 15 seconds and restart daemon") time.sleep(15) def check_pid(self): """Check whether daemon is already running""" pid = None pid_file = conf.get_agent_pid_file_path() if os.path.isfile(pid_file): pid = fileutil.read_file(pid_file) if self.osutil.check_pid_alive(pid): logger.info("Daemon is already running: {0}", pid) sys.exit(0) fileutil.write_file(pid_file, ustr(os.getpid())) def sleep_if_disabled(self): agent_disabled_file_path = conf.get_disable_agent_file_path() if os.path.exists(agent_disabled_file_path): import threading logger.warn("Disabling the guest agent by sleeping forever; to re-enable, remove {0} and restart".format(agent_disabled_file_path)) logger.warn("To enable VM extensions, also ensure that the VM's osProfile.allowExtensionOperations property is set to true.") self.running = False disable_event = threading.Event() disable_event.wait() def initialize_environment(self): # Create lib dir if not os.path.isdir(conf.get_lib_dir()): fileutil.mkdir(conf.get_lib_dir(), mode=0o700) os.chdir(conf.get_lib_dir()) def _initialize_telemetry(self): protocol = self.protocol_util.get_protocol() initialize_event_logger_vminfo_common_parameters_and_protocol(protocol) def daemon(self, child_args=None): logger.info("Run daemon") 
self.protocol_util = get_protocol_util() # pylint: disable=W0201 self.scvmm_handler = get_scvmm_handler() # pylint: disable=W0201 self.resourcedisk_handler = get_resourcedisk_handler() # pylint: disable=W0201 self.rdma_handler = get_rdma_handler() # pylint: disable=W0201 self.provision_handler = get_provision_handler() # pylint: disable=W0201 self.update_handler = get_update_handler() # pylint: disable=W0201 if conf.get_detect_scvmm_env(): self.scvmm_handler.run() if conf.get_resourcedisk_format(): self.resourcedisk_handler.run() # Always redetermine the protocol start (e.g., wireserver vs. # on-premise) since a VHD can move between environments self.protocol_util.clear_protocol() self.provision_handler.run() # Once we have the protocol, complete initialization of the telemetry fields # that require the goal state and IMDS self._initialize_telemetry() # Enable RDMA, continue in errors if conf.enable_rdma(): nd_version = self.rdma_handler.get_rdma_version() self.rdma_handler.install_driver_if_needed() logger.info("RDMA capabilities are enabled in configuration") try: # Ensure the most recent SharedConfig is available # - Changes to RDMA state may not increment the goal state # incarnation number. A forced update ensures the most # current values. protocol = self.protocol_util.get_protocol() goal_state = GoalState(protocol.client, goal_state_properties=GoalStateProperties.SharedConfig) setup_rdma_device(nd_version, goal_state.shared_conf) except Exception as e: logger.error("Error setting up rdma device: %s" % e) else: logger.info("RDMA capabilities are not enabled, skipping") self.sleep_if_disabled() # Disable output to /dev/console once provisioning has completed if logger.console_output_enabled(): logger.info("End of log to /dev/console. 
The agent will now check for updates and then will process extensions.") logger.disable_console_output() while self.running: self.update_handler.run_latest(child_args=child_args) Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/000077500000000000000000000000001510742556200245645ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/__init__.py000066400000000000000000000013471510742556200267020ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.daemon.resourcedisk.factory import get_resourcedisk_handler Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/default.py000066400000000000000000000354261510742556200265740ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import re import stat import sys import threading from time import sleep import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import ustr import azurelinuxagent.common.conf as conf from azurelinuxagent.common.event import add_event, WALAEventOperation import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import ResourceDiskError from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.version import AGENT_NAME DATALOSS_WARNING_FILE_NAME = "DATALOSS_WARNING_README.txt" DATA_LOSS_WARNING = """\ WARNING: THIS IS A TEMPORARY DISK. Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT. Please do not use this disk for storing any personal or application data. For additional details, please refer to the MSDN documentation at: http://msdn.microsoft.com/en-us/library/windowsazure/jj672979.aspx """ class ResourceDiskHandler(object): def __init__(self): self.osutil = get_osutil() self.fs = conf.get_resourcedisk_filesystem() def start_activate_resource_disk(self): disk_thread = threading.Thread(target=self.run) disk_thread.start() def run(self): mount_point = None if conf.get_resourcedisk_format(): mount_point = self.activate_resource_disk() if mount_point is not None and \ conf.get_resourcedisk_enable_swap(): self.enable_swap(mount_point) def activate_resource_disk(self): logger.info("Activate resource disk") try: mount_point = conf.get_resourcedisk_mountpoint() mount_point = self.mount_resource_disk(mount_point) warning_file = os.path.join(mount_point, DATALOSS_WARNING_FILE_NAME) try: fileutil.write_file(warning_file, DATA_LOSS_WARNING) except IOError as e: logger.warn("Failed to write data loss warning:{0}", e) return mount_point except ResourceDiskError as e: logger.error("Failed to mount resource disk {0}", e) add_event(name=AGENT_NAME,
is_success=False, message=ustr(e), op=WALAEventOperation.ActivateResourceDisk) return None def enable_swap(self, mount_point): logger.info("Enable swap") try: size_mb = conf.get_resourcedisk_swap_size_mb() self.create_swap_space(mount_point, size_mb) except ResourceDiskError as e: logger.error("Failed to enable swap {0}", e) def reread_partition_table(self, device): if shellutil.run("sfdisk -R {0}".format(device), chk_err=False): shellutil.run("blockdev --rereadpt {0}".format(device), chk_err=False) def mount_resource_disk(self, mount_point): device = self.osutil.device_for_ide_port(1) if device is None: raise ResourceDiskError("unable to detect disk topology") device = "/dev/{0}".format(device) partition = device + "1" mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, device) if existing: logger.info("Resource disk [{0}] is already mounted [{1}]", partition, existing) return existing try: fileutil.mkdir(mount_point, mode=0o755) except OSError as ose: msg = "Failed to create mount point " \ "directory [{0}]: {1}".format(mount_point, ose) logger.error(msg) raise ResourceDiskError(msg=msg, inner=ose) logger.info("Examining partition table") ret = shellutil.run_get_output("parted {0} print".format(device)) if ret[0]: raise ResourceDiskError("Could not determine partition info for " "{0}: {1}".format(device, ret[1])) force_option = 'F' if self.fs in ('btrfs', 'xfs'): force_option = 'f' mkfs_string = "mkfs.{0} -{2} {1}".format( self.fs, partition, force_option) if "gpt" in ret[1]: logger.info("GPT detected, finding partitions") parts = [x for x in ret[1].split("\n") if re.match(r"^\s*[0-9]+", x)] logger.info("Found {0} GPT partition(s).", len(parts)) if len(parts) > 1: logger.info("Removing old GPT partitions") for i in range(1, len(parts) + 1): logger.info("Remove partition {0}", i) shellutil.run("parted {0} rm {1}".format(device, i)) logger.info("Creating new GPT partition") shellutil.run( "parted {0} mkpart primary 
0% 100%".format(device)) logger.info("Format partition [{0}]", mkfs_string) shellutil.run(mkfs_string) else: logger.info("GPT not detected, determining filesystem") ret = self.change_partition_type( suppress_message=True, option_str="{0} 1 -n".format(device)) ptype = ret[1].strip() if ptype == "7" and self.fs != "ntfs": logger.info("The partition is formatted with ntfs, updating " "partition type to 83") self.change_partition_type( suppress_message=False, option_str="{0} 1 83".format(device)) self.reread_partition_table(device) logger.info("Format partition [{0}]", mkfs_string) shellutil.run(mkfs_string) else: logger.info("The partition type is {0}", ptype) mount_options = conf.get_resourcedisk_mountoptions() mount_string = self.get_mount_string(mount_options, partition, mount_point) attempts = 5 while not os.path.exists(partition) and attempts > 0: logger.info("Waiting for partition [{0}], {1} attempts remaining", partition, attempts) sleep(5) attempts -= 1 if not os.path.exists(partition): raise ResourceDiskError( "Partition was not created [{0}]".format(partition)) logger.info("Mount resource disk [{0}]", mount_string) ret, output = shellutil.run_get_output(mount_string, chk_err=False) # if the exit code is 32, then the resource disk can be already mounted if ret == 32 and output.find("is already mounted") != -1: logger.warn("Could not mount resource disk: {0}", output) elif ret != 0: # Some kernels seem to issue an async partition re-read after a # 'parted' command invocation. This causes mount to fail if the # partition re-read is not complete by the time mount is # attempted. Seen in CentOS 7.2. Force a sequential re-read of # the partition and try mounting. logger.warn("Failed to mount resource disk. " "Retry mounting after re-reading partition info.") self.reread_partition_table(device) ret, output = shellutil.run_get_output(mount_string, chk_err=False) if ret: logger.warn("Failed to mount resource disk. " "Attempting to format and retry mount. 
[{0}]", output) shellutil.run(mkfs_string) ret, output = shellutil.run_get_output(mount_string) if ret: raise ResourceDiskError("Could not mount {0} " "after syncing partition table: " "[{1}] {2}".format(partition, ret, output)) logger.info("Resource disk {0} is mounted at {1} with {2}", device, mount_point, self.fs) return mount_point def change_partition_type(self, suppress_message, option_str): """ use sfdisk to change partition type. First try with --part-type; if fails, fall back to -c """ option_to_use = '--part-type' command = "sfdisk {0} {1} {2}".format( option_to_use, '-f' if suppress_message else '', option_str) err_code, output = shellutil.run_get_output( command, chk_err=False, log_cmd=True) # fall back to -c if err_code != 0: logger.info( "sfdisk with --part-type failed [{0}], retrying with -c", err_code) option_to_use = '-c' command = "sfdisk {0} {1} {2}".format( option_to_use, '-f' if suppress_message else '', option_str) err_code, output = shellutil.run_get_output(command, log_cmd=True) if err_code == 0: logger.info('{0} succeeded', command) else: logger.error('{0} failed [{1}: {2}]', command, err_code, output) return err_code, output def get_mount_string(self, mount_options, partition, mount_point): if mount_options is not None: return 'mount -t {0} -o {1} {2} {3}'.format( self.fs, mount_options, partition, mount_point ) else: return 'mount -t {0} {1} {2}'.format( self.fs, partition, mount_point ) @staticmethod def check_existing_swap_file(swapfile, swaplist, size): if swapfile in swaplist and os.path.isfile( swapfile) and os.path.getsize(swapfile) == size: logger.info("Swap already enabled") # restrict access to owner (remove all access from group, others) swapfile_mode = os.stat(swapfile).st_mode if swapfile_mode & (stat.S_IRWXG | stat.S_IRWXO): swapfile_mode = swapfile_mode & ~(stat.S_IRWXG | stat.S_IRWXO) logger.info( "Changing mode of {0} to {1:o}".format( swapfile, swapfile_mode)) os.chmod(swapfile, swapfile_mode) return True return False def 
create_swap_space(self, mount_point, size_mb): size_kb = size_mb * 1024 size = size_kb * 1024 swapfile = os.path.join(mount_point, 'swapfile') swaplist = shellutil.run_get_output("swapon -s")[1] if self.check_existing_swap_file(swapfile, swaplist, size): return if os.path.isfile(swapfile) and os.path.getsize(swapfile) != size: logger.info("Remove old swap file") shellutil.run("swapoff {0}".format(swapfile), chk_err=False) os.remove(swapfile) if not os.path.isfile(swapfile): logger.info("Create swap file") self.mkfile(swapfile, size_kb * 1024) shellutil.run("mkswap {0}".format(swapfile)) if shellutil.run("swapon {0}".format(swapfile)): raise ResourceDiskError("{0}".format(swapfile)) logger.info("Enabled {0}KB of swap at {1}".format(size_kb, swapfile)) def mkfile(self, filename, nbytes): """ Create a non-sparse file of that size. Deletes and replaces existing file. To allow efficient execution, fallocate will be tried first. This includes ``os.posix_fallocate`` on Python 3.3+ (unix) and the ``fallocate`` command in the popular ``util-linux{,-ng}`` package. A dd fallback will be tried too. When size < 64M, perform single-pass dd. Otherwise do two-pass dd. """ if not isinstance(nbytes, int): nbytes = int(nbytes) if nbytes <= 0: raise ResourceDiskError("Invalid swap size [{0}]".format(nbytes)) if os.path.isfile(filename): os.remove(filename) # If file system is xfs, use dd right away as we have been reported that # swap enabling fails in xfs fs when disk space is allocated with # fallocate ret = 0 fn_sh = shellutil.quote((filename,)) if self.fs not in ['xfs', 'ext4']: # os.posix_fallocate if sys.version_info >= (3, 3): # Probable errors: # - OSError: Seen on Cygwin, libc notimpl? # - AttributeError: What if someone runs this under... 
fd = None try: fd = os.open( filename, os.O_CREAT | os.O_WRONLY | os.O_EXCL, stat.S_IRUSR | stat.S_IWUSR) os.posix_fallocate(fd, 0, nbytes) # pylint: disable=no-member return 0 except BaseException: # Not confident with this thing, just keep trying... pass finally: if fd is not None: os.close(fd) # fallocate command ret = shellutil.run( u"umask 0077 && fallocate -l {0} {1}".format(nbytes, fn_sh)) if ret == 0: return ret logger.info("fallocate unsuccessful, falling back to dd") # dd fallback dd_maxbs = 64 * 1024 ** 2 dd_cmd = "umask 0077 && dd if=/dev/zero bs={0} count={1} " \ "conv=notrunc of={2}" blocks = int(nbytes / dd_maxbs) if blocks > 0: ret = shellutil.run(dd_cmd.format(dd_maxbs, blocks, fn_sh)) << 8 remains = int(nbytes % dd_maxbs) if remains > 0: ret += shellutil.run(dd_cmd.format(remains, 1, fn_sh)) if ret == 0: logger.info("dd successful") else: logger.error("dd unsuccessful") return ret Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/factory.py000066400000000000000000000025751510742556200266160ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, DISTRO_FULL_NAME from .default import ResourceDiskHandler from .freebsd import FreeBSDResourceDiskHandler from .openbsd import OpenBSDResourceDiskHandler from .openwrt import OpenWRTResourceDiskHandler def get_resourcedisk_handler(distro_name=DISTRO_NAME, distro_version=DISTRO_VERSION, # pylint: disable=W0613 distro_full_name=DISTRO_FULL_NAME): # pylint: disable=W0613 if distro_name == "freebsd": return FreeBSDResourceDiskHandler() if distro_name == "openbsd": return OpenBSDResourceDiskHandler() if distro_name == "openwrt": return OpenWRTResourceDiskHandler() return ResourceDiskHandler() Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/freebsd.py000066400000000000000000000162501510742556200265540ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import ResourceDiskError from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler class FreeBSDResourceDiskHandler(ResourceDiskHandler): """ This class handles resource disk mounting for FreeBSD. 
The resource disk is located at the following slot: scbus2 on blkvsc1 bus 0: at scbus2 target 1 lun 0 (da1,pass2) There are 2 variations based on partition table type: 1. MBR: The resource disk partition is /dev/da1s1 2. GPT: The resource disk partition is /dev/da1p2, /dev/da1p1 is for reserved usage. """ def __init__(self): # pylint: disable=W0235 super(FreeBSDResourceDiskHandler, self).__init__() @staticmethod def parse_gpart_list(data): dic = {} for line in data.split('\n'): if line.find("Geom name: ") != -1: geom_name = line[11:] elif line.find("scheme: ") != -1: dic[geom_name] = line[8:] return dic def mount_resource_disk(self, mount_point): fs = self.fs if fs != 'ufs': raise ResourceDiskError( "Unsupported filesystem type:{0}, only ufs is supported.".format(fs)) # 1. Detect device err, output = shellutil.run_get_output('gpart list') if err: raise ResourceDiskError( "Unable to detect resource disk device:{0}".format(output)) disks = self.parse_gpart_list(output) device = self.osutil.device_for_ide_port(1) if device is None or device not in disks: # fallback logic to find device err, output = shellutil.run_get_output( 'camcontrol periphlist 2:1:0') if err: # try again on "3:1:0" err, output = shellutil.run_get_output( 'camcontrol periphlist 3:1:0') if err: raise ResourceDiskError( "Unable to detect resource disk device:{0}".format(output)) # 'da1: generation: 4 index: 1 status: MORE\npass2: generation: 4 index: 2 status: LAST\n' for line in output.split('\n'): index = line.find(':') if index > 0: geom_name = line[:index] if geom_name in disks: device = geom_name break if not device: raise ResourceDiskError("Unable to detect resource disk device.") logger.info('Resource disk device {0} found.', device) # 2.
Detect partition partition_table_type = disks[device] if partition_table_type == 'MBR': provider_name = device + 's1' elif partition_table_type == 'GPT': provider_name = device + 'p2' else: raise ResourceDiskError( "Unsupported partition table type:{0}".format(output)) err, output = shellutil.run_get_output( 'gpart show -p {0}'.format(device)) if err or output.find(provider_name) == -1: raise ResourceDiskError("Resource disk partition not found.") partition = '/dev/' + provider_name logger.info('Resource disk partition {0} found.', partition) # 3. Mount partition mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, partition) if existing: logger.info("Resource disk {0} is already mounted", partition) return existing fileutil.mkdir(mount_point, mode=0o755) mount_cmd = 'mount -t {0} {1} {2}'.format(fs, partition, mount_point) err = shellutil.run(mount_cmd, chk_err=False) if err: logger.info( 'Creating {0} filesystem on partition {1}'.format( fs, partition)) err, output = shellutil.run_get_output( 'newfs -U {0}'.format(partition)) if err: raise ResourceDiskError( "Failed to create new filesystem on partition {0}, error:{1}" .format( partition, output)) err, output = shellutil.run_get_output(mount_cmd, chk_err=False) if err: raise ResourceDiskError( "Failed to mount partition {0}, error {1}".format( partition, output)) logger.info( "Resource disk partition {0} is mounted at {1} with fstype {2}", partition, mount_point, fs) return mount_point def create_swap_space(self, mount_point, size_mb): size_kb = size_mb * 1024 size = size_kb * 1024 swapfile = os.path.join(mount_point, 'swapfile') swaplist = shellutil.run_get_output("swapctl -l")[1] if self.check_existing_swap_file(swapfile, swaplist, size): return if os.path.isfile(swapfile) and os.path.getsize(swapfile) != size: logger.info("Remove old swap file") shellutil.run("swapoff {0}".format(swapfile), chk_err=False) os.remove(swapfile) if not os.path.isfile(swapfile): 
logger.info("Create swap file") self.mkfile(swapfile, size_kb * 1024) mddevice = shellutil.run_get_output( "mdconfig -a -t vnode -f {0}".format(swapfile))[1].rstrip() shellutil.run("chmod 0600 /dev/{0}".format(mddevice)) if conf.get_resourcedisk_enable_swap_encryption(): shellutil.run("kldload aesni") shellutil.run("kldload cryptodev") shellutil.run("kldload geom_eli") shellutil.run( "geli onetime -e AES-XTS -l 256 -d /dev/{0}".format(mddevice)) shellutil.run("chmod 0600 /dev/{0}.eli".format(mddevice)) if shellutil.run("swapon /dev/{0}.eli".format(mddevice)): raise ResourceDiskError("/dev/{0}.eli".format(mddevice)) logger.info( "Enabled {0}KB of swap at /dev/{1}.eli ({2})".format(size_kb, mddevice, swapfile)) else: if shellutil.run("swapon /dev/{0}".format(mddevice)): raise ResourceDiskError("/dev/{0}".format(mddevice)) logger.info( "Enabled {0}KB of swap at /dev/{1} ({2})".format(size_kb, mddevice, swapfile)) Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/openbsd.py000066400000000000000000000114431510742556200265730ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2017 Reyk Floeter # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
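The FreeBSD handler's parse_gpart_list above maps each geom (disk) name to the partition scheme reported by `gpart list`, which mount_resource_disk then uses to pick the right partition suffix (s1 for MBR, p2 for GPT). A standalone sketch of the same parsing; the sample output below is abbreviated and illustrative, not verbatim gpart output:

```python
def parse_gpart_list(data):
    """Map each geom (disk) name to its partition scheme ('MBR' or 'GPT')."""
    schemes = {}
    geom_name = None
    for line in data.split('\n'):
        if "Geom name: " in line:
            geom_name = line.split("Geom name: ", 1)[1]
        elif "scheme: " in line and geom_name is not None:
            schemes[geom_name] = line.split("scheme: ", 1)[1]
    return schemes


sample = """Geom name: da1
modified: false
scheme: MBR
Geom name: da0
scheme: GPT"""

print(parse_gpart_list(sample))
```

Unlike the agent's fixed-offset slicing (`line[11:]`, `line[8:]`), this sketch splits on the label itself, which also tolerates indented lines.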
#
# Requires Python 2.6+ and OpenSSL 1.0+
#
import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.fileutil as fileutil
import azurelinuxagent.common.utils.shellutil as shellutil
import azurelinuxagent.common.conf as conf
from azurelinuxagent.common.exception import ResourceDiskError
from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler


class OpenBSDResourceDiskHandler(ResourceDiskHandler):
    def __init__(self):
        super(OpenBSDResourceDiskHandler, self).__init__()
        # The Fast File System (FFS) is UFS
        if self.fs == 'ufs' or self.fs == 'ufs2':
            self.fs = 'ffs'

    def create_swap_space(self, mount_point, size_mb):
        pass

    def enable_swap(self, mount_point):
        size_mb = conf.get_resourcedisk_swap_size_mb()
        if size_mb:
            logger.info("Enable swap")
            device = self.osutil.device_for_ide_port(1)
            err, output = shellutil.run_get_output("swapctl -a /dev/"
                                                   "{0}b".format(device),
                                                   chk_err=False)
            if err:
                logger.error("Failed to enable swap, error {0}", output)

    def mount_resource_disk(self, mount_point):
        fs = self.fs
        if fs != 'ffs':
            raise ResourceDiskError("Unsupported filesystem type: {0}, only "
                                    "ufs/ffs is supported.".format(fs))
        # 1. Get device
        device = self.osutil.device_for_ide_port(1)
        if not device:
            raise ResourceDiskError("Unable to detect resource disk device.")
        logger.info('Resource disk device {0} found.', device)

        # 2. Get partition
        partition = "/dev/{0}a".format(device)

        # 3.
Mount partition mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, partition) if existing: logger.info("Resource disk {0} is already mounted", partition) return existing fileutil.mkdir(mount_point, mode=0o755) mount_cmd = 'mount -t {0} {1} {2}'.format(self.fs, partition, mount_point) err = shellutil.run(mount_cmd, chk_err=False) if err: logger.info('Creating {0} filesystem on {1}'.format(fs, device)) fdisk_cmd = "/sbin/fdisk -yi {0}".format(device) err, output = shellutil.run_get_output(fdisk_cmd, chk_err=False) if err: raise ResourceDiskError("Failed to create new MBR on {0}, " "error: {1}".format(device, output)) size_mb = conf.get_resourcedisk_swap_size_mb() if size_mb: if size_mb > 512 * 1024: size_mb = 512 * 1024 disklabel_cmd = ("echo -e '{0} 1G-* 50%\nswap 1-{1}M 50%' " "| disklabel -w -A -T /dev/stdin " "{2}").format(mount_point, size_mb, device) ret, output = shellutil.run_get_output( disklabel_cmd, chk_err=False) if ret: raise ResourceDiskError("Failed to create new disklabel " "on {0}, error " "{1}".format(device, output)) err, output = shellutil.run_get_output("newfs -O2 {0}a" "".format(device)) if err: raise ResourceDiskError("Failed to create new filesystem on " "partition {0}, error " "{1}".format(partition, output)) err, output = shellutil.run_get_output(mount_cmd, chk_err=False) if err: raise ResourceDiskError("Failed to mount partition {0}, " "error {1}".format(partition, output)) logger.info("Resource disk partition {0} is mounted at {1} with fstype " "{2}", partition, mount_point, fs) return mount_point Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/resourcedisk/openwrt.py000066400000000000000000000133371510742556200266430ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2018 Sonus Networks, Inc. (d.b.a. 
Ribbon Communications Operating Company)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#
import os
from time import sleep

import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.fileutil as fileutil
import azurelinuxagent.common.utils.shellutil as shellutil
import azurelinuxagent.common.conf as conf
from azurelinuxagent.common.exception import ResourceDiskError
from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler


class OpenWRTResourceDiskHandler(ResourceDiskHandler):
    def __init__(self):
        super(OpenWRTResourceDiskHandler, self).__init__()
        # The Fast File System (FFS) is UFS
        if self.fs == 'ufs' or self.fs == 'ufs2':
            self.fs = 'ffs'

    def reread_partition_table(self, device):
        ret, output = shellutil.run_get_output("hdparm -z {0}".format(device), chk_err=False)  # pylint: disable=W0612
        if ret != 0:
            logger.warn("Failed to refresh the partition table.")

    def mount_resource_disk(self, mount_point):
        device = self.osutil.device_for_ide_port(1)
        if device is None:
            raise ResourceDiskError("unable to detect disk topology")
        logger.info('Resource disk device {0} found.', device)

        # 2. Get partition
        device = "/dev/{0}".format(device)
        partition = device + "1"
        logger.info('Resource disk partition {0} found.', partition)

        # 3.
Mount partition mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, device) if existing: logger.info("Resource disk [{0}] is already mounted [{1}]", partition, existing) return existing try: fileutil.mkdir(mount_point, mode=0o755) except OSError as ose: msg = "Failed to create mount point " \ "directory [{0}]: {1}".format(mount_point, ose) logger.error(msg) raise ResourceDiskError(msg=msg, inner=ose) force_option = 'F' if self.fs == 'xfs': force_option = 'f' mkfs_string = "mkfs.{0} -{2} {1}".format(self.fs, partition, force_option) # Compare to the Default mount_resource_disk, we don't check for GPT that is not supported on OpenWRT ret = self.change_partition_type(suppress_message=True, option_str="{0} 1 -n".format(device)) ptype = ret[1].strip() if ptype == "7" and self.fs != "ntfs": logger.info("The partition is formatted with ntfs, updating " "partition type to 83") self.change_partition_type(suppress_message=False, option_str="{0} 1 83".format(device)) self.reread_partition_table(device) logger.info("Format partition [{0}]", mkfs_string) shellutil.run(mkfs_string) else: logger.info("The partition type is {0}", ptype) mount_options = conf.get_resourcedisk_mountoptions() mount_string = self.get_mount_string(mount_options, partition, mount_point) attempts = 5 while not os.path.exists(partition) and attempts > 0: logger.info("Waiting for partition [{0}], {1} attempts remaining", partition, attempts) sleep(5) attempts -= 1 if not os.path.exists(partition): raise ResourceDiskError("Partition was not created [{0}]".format(partition)) if os.path.ismount(mount_point): logger.warn("Disk is already mounted on {0}", mount_point) else: # Some kernels seem to issue an async partition re-read after a # command invocation. This causes mount to fail if the # partition re-read is not complete by the time mount is # attempted. Seen in CentOS 7.2. Force a sequential re-read of # the partition and try mounting. 
logger.info("Mounting after re-reading partition info.") self.reread_partition_table(device) logger.info("Mount resource disk [{0}]", mount_string) ret, output = shellutil.run_get_output(mount_string) if ret: logger.warn("Failed to mount resource disk. " "Attempting to format and retry mount. [{0}]", output) shellutil.run(mkfs_string) ret, output = shellutil.run_get_output(mount_string) if ret: raise ResourceDiskError("Could not mount {0} " "after syncing partition table: " "[{1}] {2}".format(partition, ret, output)) logger.info("Resource disk {0} is mounted at {1} with {2}", device, mount_point, self.fs) return mount_point Azure-WALinuxAgent-a976115/azurelinuxagent/daemon/scvmm.py000066400000000000000000000053271510742556200235700ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
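The OpenWRT handler above waits for the partition device node to appear with a bounded retry loop (five attempts, five seconds apart) before giving up. The same pattern as a small generic helper; the helper name and parameters are mine for illustration, not part of the agent:

```python
import time


def wait_for(predicate, attempts=5, delay=5):
    """Poll `predicate` until it returns True or the attempts are exhausted.

    Mirrors the bounded retry loop the OpenWRT handler uses while waiting
    for the resource-disk partition device node to show up.
    """
    while attempts > 0:
        if predicate():
            return True
        time.sleep(delay)
        attempts -= 1
    # One final check, matching the handler's existence test after the loop.
    return predicate()
```

In the handler itself, `predicate` is effectively `lambda: os.path.exists(partition)`, and a False result is turned into a ResourceDiskError.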
# # Requires Python 2.6+ and Openssl 1.0+ # import re import os import sys import subprocess import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.conf as conf from azurelinuxagent.common.osutil import get_osutil VMM_CONF_FILE_NAME = "linuxosconfiguration.xml" VMM_STARTUP_SCRIPT_NAME= "install" def get_scvmm_handler(): return ScvmmHandler() class ScvmmHandler(object): def __init__(self): self.osutil = get_osutil() def detect_scvmm_env(self, dev_dir='/dev'): logger.info("Detecting Microsoft System Center VMM Environment") found=False # try to load the ATAPI driver, continue on failure self.osutil.try_load_atapiix_mod() # cycle through all available /dev/sr*|hd*|cdrom*|cd* looking for the scvmm configuration file mount_point = conf.get_dvd_mount_point() for devices in filter(lambda x: x is not None, [re.match(r'(sr[0-9]|hd[c-z]|cdrom[0-9]?|cd[0-9]+)', dev) for dev in os.listdir(dev_dir)]): dvd_device = os.path.join(dev_dir, devices.group(0)) self.osutil.mount_dvd(max_retry=1, chk_err=False, dvd_device=dvd_device, mount_point=mount_point) found = os.path.isfile(os.path.join(mount_point, VMM_CONF_FILE_NAME)) if found: self.start_scvmm_agent(mount_point=mount_point) break else: self.osutil.umount_dvd(chk_err=False, mount_point=mount_point) return found def start_scvmm_agent(self, mount_point=None): logger.info("Starting Microsoft System Center VMM Initialization " "Process") if mount_point is None: mount_point = conf.get_dvd_mount_point() startup_script = os.path.join(mount_point, VMM_STARTUP_SCRIPT_NAME) with open(os.devnull, 'w') as devnull: subprocess.Popen(["/bin/bash", startup_script, "-p " + mount_point], stdout=devnull, stderr=devnull) def run(self): if self.detect_scvmm_env(): logger.info("Exiting") time.sleep(300) sys.exit(0) 
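detect_scvmm_env above filters the entries of /dev with a regular expression to find candidate optical devices (sr*, hd[c-z], cdrom*, cd*) before mounting each one and probing for the VMM configuration file. A standalone demo of that filtering step; the device names below are illustrative:

```python
import re

# The same pattern ScvmmHandler uses to pick candidate DVD devices out of /dev.
DVD_DEVICE_PATTERN = r'(sr[0-9]|hd[c-z]|cdrom[0-9]?|cd[0-9]+)'


def candidate_dvd_devices(dev_entries):
    """Return the /dev entries that look like optical drives."""
    matches = [re.match(DVD_DEVICE_PATTERN, dev) for dev in dev_entries]
    return [m.group(0) for m in matches if m is not None]


print(candidate_dvd_devices(["sr0", "sda", "hdc", "cdrom", "cd1", "tty0"]))
```

Note that `re.match` only anchors at the start of the string, so like the original code this accepts any entry whose name begins with one of the patterns.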
Azure-WALinuxAgent-a976115/azurelinuxagent/ga/000077500000000000000000000000001510742556200212065ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/ga/__init__.py000066400000000000000000000011661510742556200233230ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/azurelinuxagent/ga/agent_update_handler.py000066400000000000000000000327731510742556200257310ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ from azurelinuxagent.common import conf, logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException, AgentUpdateError, AgentFamilyMissingError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.restapi import VMAgentUpdateStatuses, VMAgentUpdateStatus, VERSION_0 from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import get_daemon_version, CURRENT_VERSION from azurelinuxagent.ga.guestagent import GuestAgentUpdateUtil from azurelinuxagent.ga.rsm_version_updater import RSMVersionUpdater from azurelinuxagent.ga.self_update_version_updater import SelfUpdateVersionUpdater class UpdateMode(object): """ Enum for Update modes """ RSM = "RSM" SelfUpdate = "SelfUpdate" def get_agent_update_handler(protocol): return AgentUpdateHandler(protocol) class AgentUpdateHandler(object): """ This class handles two type of agent updates. Handler initializes the updater to SelfUpdateVersionUpdater and switch to appropriate updater based on below conditions: RSM update: This update requested by RSM and contract between CRP and agent is we get following properties in the goal state: version: it will have what version to update isVersionFromRSM: True if the version is from RSM deployment. isVMEnabledForRSMUpgrades: True if the VM is enabled for RSM upgrades. fromVersion: This property specifies the version to update from. It is populated only for downgrade requests and subsequent goal states thereafter, until an upgrade request. if vm enabled for RSM upgrades, we use RSM update path. But if requested update is not by rsm deployment( if isVersionFromRSM:False) we ignore the update. Self update: We fallback to this if above condition not met. This update to the largest version available in the manifest. 
Also, we use self-update for initial update due to [1][2] Note: Self-update don't support downgrade. [1] New vms that are enrolled into RSM, they get isVMEnabledForRSMUpgrades as True and isVersionFromRSM as False in first goal state. As per RSM update flow mentioned above, we don't apply the update if isVersionFromRSM is false. Consequently, new vms remain on pre-installed agent until RSM drives a new version update. In the meantime, agent may process the extensions with the baked version. This can potentially lead to issues due to incompatibility. [2] If current version is N, and we are deploying N+1. We find an issue on N+1 and remove N+1 from PIR. If CRP created the initial goal state for a new vm before the delete, the version in the goal state would be N+1; If the agent starts processing the goal state after the deleting, it won't find N+1 and update will fail and the vm will use baked version. Handler updates the state if current update mode is changed from last update mode(RSM or Self-Update) on new goal state. Once handler decides which updater to use, then updater does following steps: 1. Retrieve the agent version from the goal state. 2. Check if we allowed to update for that version. 3. Log the update message. 4. Purge the extra agents from disk. 5. Download the new agent. 6. Proceed with update. [Note: 1.0.8.147 is the minimum supported version of HGPA which will have the isVersionFromRSM and isVMEnabledForRSMUpgrades properties in vmsettings.] """ def __init__(self, protocol): self._protocol = protocol self._gs_id = "unknown" self._ga_family_type = conf.get_autoupdate_gafamily() self._daemon_version = self._get_daemon_version_for_update() self._last_attempted_update_error_msg = "" # Restore the state of rsm update. 
Default to self-update if last update is not with RSM or if agent doing initial update if not GuestAgentUpdateUtil.is_last_update_with_rsm() or GuestAgentUpdateUtil.is_initial_update(): self._updater = SelfUpdateVersionUpdater(self._gs_id) else: self._updater = RSMVersionUpdater(self._gs_id, self._daemon_version) @staticmethod def _get_daemon_version_for_update(): daemon_version = get_daemon_version() if daemon_version != FlexibleVersion(VERSION_0): return daemon_version # We return 0.0.0.0 if daemon version is not specified. In that case, # use the min version as 2.2.53 as we started setting the daemon version starting 2.2.53. return FlexibleVersion("2.2.53") def _get_agent_family_manifest(self, goal_state): """ Get the agent_family from last GS for the given family Returns: first entry of Manifest Exception if no manifests found in the last GS and log it only on new goal state """ family = self._ga_family_type agent_families = goal_state.extensions_goal_state.agent_families family_found = False agent_family_manifests = [] for m in agent_families: if m.name == family: family_found = True if len(m.uris) > 0: agent_family_manifests.append(m) if not family_found: raise AgentFamilyMissingError(u"Agent family: {0} not found in the goal state: {1}, skipping agent update \n" u"[Note: This error is permanent for this goal state and Will not log same error until we receive new goal state]".format(family, self._gs_id)) if len(agent_family_manifests) == 0: raise AgentFamilyMissingError( u"No manifest links found for agent family: {0} for goal state: {1}, skipping agent update \n" u"[Note: This error is permanent for this goal state and will not log same error until we receive new goal state]".format( family, self._gs_id)) return agent_family_manifests[0] def get_current_update_mode(self): """ Returns current update mode whether RSM or Self-Update """ if isinstance(self._updater, RSMVersionUpdater): return UpdateMode.RSM else: return UpdateMode.SelfUpdate def run(self, 
goal_state, ext_gs_updated): try: # If auto update is disabled, we don't proceed with update if not conf.get_auto_update_to_latest_version(): self._last_attempted_update_error_msg = "Auto update is disabled, skipping agent update" return # Update the state only on new goal state if ext_gs_updated: # Reset the last reported update state on new goal state before we attempt update otherwise we keep reporting the last update error if any self._last_attempted_update_error_msg = "" self._gs_id = goal_state.extensions_goal_state.id self._updater.sync_new_gs_id(self._gs_id) agent_family = self._get_agent_family_manifest(goal_state) # Always agent uses self-update for initial update regardless vm enrolled into RSM or not # So ignoring the check for updater switch for the initial goal state/update if not GuestAgentUpdateUtil.is_initial_update(): # Updater will return True or False if we need to switch the updater # If self-updater receives RSM update enabled, it will switch to RSM updater # If RSM updater receives RSM update disabled, it will switch to self-update # No change in updater if GS not updated is_rsm_update_enabled = self._updater.is_rsm_update_enabled(agent_family, ext_gs_updated) if not is_rsm_update_enabled and isinstance(self._updater, RSMVersionUpdater): msg = "VM not enabled for RSM updates, switching to self-update mode" logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) self._updater = SelfUpdateVersionUpdater(self._gs_id) GuestAgentUpdateUtil.remove_rsm_update_state_file() if is_rsm_update_enabled and isinstance(self._updater, SelfUpdateVersionUpdater): msg = "VM enabled for RSM updates, switching to RSM update mode" logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) self._updater = RSMVersionUpdater(self._gs_id, self._daemon_version) GuestAgentUpdateUtil.save_rsm_update_state_file() # If updater is changed in previous step, we allow update as it consider as first attempt. 
If not, it checks below condition:
            # RSM checks new goal state; self-update checks manifest download interval
            if not self._updater.is_update_allowed_this_time(ext_gs_updated):
                return

            self._updater.retrieve_agent_version(agent_family, goal_state)

            if not self._updater.is_retrieved_version_allowed_to_update(agent_family):
                return
            self._updater.log_new_agent_update_message()
            agent = self._updater.download_and_get_new_agent(self._protocol, agent_family, goal_state)

            # The condition below breaks the update loop if the new agent is in a bad state from previous attempts.
            # If the bad agent update was already attempted 3 times, we don't want to continue with the update anymore.
            # Otherwise we allow the update by incrementing the update attempt count and clearing the bad state.
            # [Note: As a result, this breaks the contract between RSM and agent; we may NOT honor the RSM retries for that version]
            if agent.get_update_attempt_count() >= 3:
                msg = "Attempted enough update retries for version: {0} but still agent not recovered from bad state.
So, we stop updating to this version".format(str(agent.version)) raise AgentUpdateError(msg) else: agent.clear_error() agent.inc_update_attempt_count() msg = "Agent update attempt count: {0} for version: {1}".format(agent.get_update_attempt_count(), str(agent.version)) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) self._updater.purge_extra_agents_from_disk() self._updater.proceed_with_update() except Exception as err: log_error = True if isinstance(err, AgentUpgradeExitException): raise err elif isinstance(err, AgentUpdateError): error_msg = ustr(err) elif isinstance(err, AgentFamilyMissingError): error_msg = ustr(err) # Agent family missing error is permanent in the given goal state, so we don't want to log it on every iteration of main loop if there is no new goal state log_error = ext_gs_updated else: error_msg = "Unable to update Agent: {0}".format(textutil.format_exception(err)) if log_error: error_msg = "[{0}]{1}".format(self.get_current_update_mode(), error_msg) logger.warn(error_msg) add_event(op=WALAEventOperation.AgentUpgrade, is_success=False, message=error_msg, log_event=False) self._last_attempted_update_error_msg = error_msg # save initial update state when agent is doing first update finally: if GuestAgentUpdateUtil.is_initial_update(): GuestAgentUpdateUtil.save_initial_update_state_file() def get_vmagent_update_status(self): """ This function gets the VMAgent update status as per the last attempted update. 
Returns: None if fail to report or update never attempted with rsm version specified in GS Note: We report the status only when vm enrolled into RSM """ try: if self.get_current_update_mode() == UpdateMode.RSM: if not self._last_attempted_update_error_msg: status = VMAgentUpdateStatuses.Success code = 0 else: status = VMAgentUpdateStatuses.Error code = 1 return VMAgentUpdateStatus(expected_version=str(CURRENT_VERSION), status=status, code=code, message=self._last_attempted_update_error_msg) except Exception as err: msg = "Unable to report agent update status: {0}".format(textutil.format_exception(err)) logger.warn(msg) add_event(op=WALAEventOperation.AgentUpgrade, is_success=False, message=msg, log_event=True) return NoneAzure-WALinuxAgent-a976115/azurelinuxagent/ga/cgroupapi.py000066400000000000000000001147771510742556200235720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
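The update handler's _get_daemon_version_for_update above substitutes 2.2.53 when the recorded daemon version equals the 0.0.0.0 sentinel, since the agent only started recording its daemon version at 2.2.53. A simplified standalone version of that fallback, comparing dotted versions as integer tuples instead of using the agent's internal FlexibleVersion class:

```python
def daemon_version_for_update(daemon_version, sentinel="0.0.0.0", minimum="2.2.53"):
    """Return the recorded daemon version, or `minimum` if it is the sentinel.

    Versions are compared as tuples of integers here; the agent itself uses
    FlexibleVersion for the same comparison.
    """
    def as_tuple(version):
        return tuple(int(part) for part in version.split('.'))

    if as_tuple(daemon_version) != as_tuple(sentinel):
        return daemon_version
    return minimum
```

Comparing parsed tuples rather than raw strings means "0.0.0.0" and "0.0.0" padding differences would not defeat the sentinel check in FlexibleVersion; this sketch keeps the exact-tuple comparison for simplicity.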
# # Requires Python 2.6+ and Openssl 1.0+ import json import os import re import shutil import subprocess import threading import uuid from azurelinuxagent.common import logger from azurelinuxagent.common.event import WALAEventOperation, add_event from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.ga.cpucontroller import _CpuController, CpuControllerV1, CpuControllerV2 from azurelinuxagent.ga.memorycontroller import MemoryControllerV1, MemoryControllerV2 from azurelinuxagent.common.conf import get_agent_pid_file_path from azurelinuxagent.common.exception import CGroupsException, ExtensionErrorCodes, ExtensionError, \ ExtensionOperationError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import systemd from azurelinuxagent.common.utils import fileutil, shellutil from azurelinuxagent.ga.extensionprocessutil import handle_process_completion, read_output, \ TELEMETRY_MESSAGE_MAX_LEN from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import get_distro CGROUP_FILE_SYSTEM_ROOT = '/sys/fs/cgroup' EXTENSION_SLICE_PREFIX = "azure-vmextensions" def log_cgroup_info(formatted_string, op=WALAEventOperation.CGroupsInfo, send_event=True): logger.info("[CGI] " + formatted_string) if send_event: add_event(op=op, message=formatted_string) def log_cgroup_warning(formatted_string, op=WALAEventOperation.CGroupsInfo, send_event=True): logger.info("[CGW] " + formatted_string) # log as INFO for now, in the future it should be logged as WARNING if send_event: add_event(op=op, message=formatted_string, is_success=False, log_event=False) class CGroupUtil(object): """ Cgroup utility methods which are independent of systemd cgroup api. 
""" @staticmethod def distro_supported(): distro_info = get_distro() distro_name = distro_info[0] try: distro_version = FlexibleVersion(distro_info[1]) except ValueError: return False return (distro_name.lower() == 'ubuntu' and distro_version.major >= 16) or \ (distro_name.lower() in ('centos', 'redhat') and distro_version.major == 8) or \ (distro_name.lower() == 'rhel' and distro_version.major == 9) or \ (distro_name.lower() == 'azurelinux' and distro_version.major == 3) @staticmethod def get_extension_slice_name(extension_name, old_slice=False): # The old slice makes it difficult for user to override the limits because they need to place drop-in files on every upgrade if extension slice is different for each version. # old slice includes .- # new slice without version . if not old_slice: extension_name = extension_name.rsplit("-", 1)[0] # Since '-' is used as a separator in systemd unit names, we replace it with '_' to prevent side-effects. return EXTENSION_SLICE_PREFIX + "-" + extension_name.replace('-', '_') + ".slice" @staticmethod def get_daemon_pid(): return int(fileutil.read_file(get_agent_pid_file_path()).strip()) @staticmethod def _foreach_legacy_cgroup(operation): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. Also, when running under systemd, the PIDs should not be explicitly moved to the cgroup filesystem. The older daemons would incorrectly do that under certain conditions. This method checks for the existence of the legacy cgroups and, if the daemon's PID has been added to them, executes the given operation on the cgroups. After this check, the method attempts to remove the legacy cgroups. :param operation: The function to execute on each legacy cgroup. 
It must take 2 arguments: the controller and the daemon's PID """ legacy_cgroups = [] for controller in ['cpu', 'memory']: cgroup = os.path.join(CGROUP_FILE_SYSTEM_ROOT, controller, "WALinuxAgent", "WALinuxAgent") if os.path.exists(cgroup): log_cgroup_info('Found legacy cgroup {0}'.format(cgroup), send_event=False) legacy_cgroups.append((controller, cgroup)) try: for controller, cgroup in legacy_cgroups: procs_file = os.path.join(cgroup, "cgroup.procs") if os.path.exists(procs_file): procs_file_contents = fileutil.read_file(procs_file).strip() daemon_pid = CGroupUtil.get_daemon_pid() if ustr(daemon_pid) in procs_file_contents: operation(controller, daemon_pid) finally: for _, cgroup in legacy_cgroups: log_cgroup_info('Removing {0}'.format(cgroup), send_event=False) shutil.rmtree(cgroup, ignore_errors=True) return len(legacy_cgroups) @staticmethod def get_current_cpu_quota(unit_name): """ Calculate the CPU percentage from CPUQuotaPerSecUSec for given unit. Params: cpu_quota_per_sec_usec (str): The value of CPUQuotaPerSecUSec (e.g., "1s", "500ms", "500us", or "infinity"). Returns: str: CPU percentage, or 'infinity' or 'unknown' if we can't determine the value. """ try: cpu_quota_per_sec_usec = systemd.get_unit_property(unit_name, "CPUQuotaPerSecUSec").strip().lower() if cpu_quota_per_sec_usec == "infinity": return cpu_quota_per_sec_usec # No limit on CPU usage # Parse the value based on the suffix elif cpu_quota_per_sec_usec.endswith("us"): # Directly use the microseconds value cpu_quota_us = float(cpu_quota_per_sec_usec[:-2]) elif cpu_quota_per_sec_usec.endswith("ms"): # Convert milliseconds to microseconds cpu_quota_us = float(cpu_quota_per_sec_usec[:-2]) * 1000 elif cpu_quota_per_sec_usec.endswith("s"): # Convert seconds to microseconds cpu_quota_us = float(cpu_quota_per_sec_usec[:-1]) * 1000000 else: raise ValueError("Invalid format. 
Expected 's', 'ms', 'us', or 'infinity'.") # Calculate CPU percentage cpu_percentage = (cpu_quota_us / 1000000) * 100 return "{0:g}%".format(cpu_percentage) # :g Removes trailing zeros after decimal point except Exception as e: log_cgroup_warning("Error parsing current CPUQuotaPerSecUSec: {0}".format(ustr(e))) return "unknown" @staticmethod def has_cpu_quota(unit_name): """ Returns True if quota set for the unit. """ cpu_quota_percentage = CGroupUtil.get_current_cpu_quota(unit_name) has_quota = cpu_quota_percentage not in ("infinity", "unknown") return has_quota @staticmethod def cleanup_legacy_cgroups(): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. If we find that any of the legacy groups include the PID of the daemon then we need to disable data collection for this instance (under systemd, moving PIDs across the cgroup file system can produce unpredictable results) """ return CGroupUtil._foreach_legacy_cgroup(lambda *_: None) class SystemdRunError(CGroupsException): """ Raised when systemd-run fails """ def __init__(self, msg=None): super(SystemdRunError, self).__init__(msg) class InvalidCgroupMountpointException(CGroupsException): """ Raised when the cgroup mountpoint is invalid. """ def __init__(self, msg=None): super(InvalidCgroupMountpointException, self).__init__(msg) def create_cgroup_api(): """ Determines which version of Cgroup should be used for resource enforcement and monitoring by the Agent and returns the corresponding Api. Uses 'stat -f --format=%T /sys/fs/cgroup' to get the cgroup hierarchy in use. If the result is 'cgroup2fs', cgroup v2 is being used. If the result is 'tmpfs', cgroup v1 or a hybrid mode is being used. If the result of 'stat -f --format=%T /sys/fs/cgroup/unified' is 'cgroup2fs', then hybrid mode is being used. 
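The CPUQuotaPerSecUSec suffix handling above can be sketched as a standalone helper. This is a minimal sketch of the same conversion; `parse_cpu_quota_per_sec_usec` is a hypothetical name, not part of the agent's API.

```python
# Minimal sketch of converting systemd's CPUQuotaPerSecUSec property to a CPU
# percentage, mirroring the suffix handling above. The function name is
# hypothetical; the agent reads the property from systemd rather than a string.
def parse_cpu_quota_per_sec_usec(value):
    value = value.strip().lower()
    if value == "infinity":
        return value  # no CPU limit configured
    if value.endswith("us"):
        quota_us = float(value[:-2])            # already microseconds
    elif value.endswith("ms"):
        quota_us = float(value[:-2]) * 1000     # milliseconds -> microseconds
    elif value.endswith("s"):
        quota_us = float(value[:-1]) * 1000000  # seconds -> microseconds
    else:
        raise ValueError("Expected 's', 'ms', 'us', or 'infinity'")
    # 1 second of CPU time per wall-clock second == 100%
    return "{0:g}%".format(quota_us / 1000000 * 100)
```

For example, "500ms" of CPU time per second corresponds to a 50% quota, and "1s" to 100%.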
Raises exception if cgroup filesystem mountpoint is not '/sys/fs/cgroup', or an unknown mode is detected. Also raises exception if hybrid mode is detected and there are controllers available to be enabled in the unified hierarchy (the agent does not support cgroups if there are controllers simultaneously attached to v1 and v2 hierarchies). """ if not os.path.exists(CGROUP_FILE_SYSTEM_ROOT): v1_mount_point = shellutil.run_command(['findmnt', '-t', 'cgroup', '--noheadings']) v2_mount_point = shellutil.run_command(['findmnt', '-t', 'cgroup2', '--noheadings']) raise InvalidCgroupMountpointException("Expected cgroup filesystem to be mounted at '{0}', but it is not.\n v1 mount point: \n{1}\n v2 mount point: \n{2}".format(CGROUP_FILE_SYSTEM_ROOT, v1_mount_point, v2_mount_point)) root_hierarchy_mode = shellutil.run_command(["stat", "-f", "--format=%T", CGROUP_FILE_SYSTEM_ROOT]).rstrip() if root_hierarchy_mode == "cgroup2fs": return SystemdCgroupApiv2() elif root_hierarchy_mode == "tmpfs": # Check if a hybrid mode is being used unified_hierarchy_path = os.path.join(CGROUP_FILE_SYSTEM_ROOT, "unified") if os.path.exists(unified_hierarchy_path) and shellutil.run_command(["stat", "-f", "--format=%T", unified_hierarchy_path]).rstrip() == "cgroup2fs": # Hybrid mode is being used. Check if any controllers are available to be enabled in the unified hierarchy. available_unified_controllers_file = os.path.join(unified_hierarchy_path, "cgroup.controllers") if os.path.exists(available_unified_controllers_file): available_unified_controllers = fileutil.read_file(available_unified_controllers_file).rstrip() if available_unified_controllers != "": raise CGroupsException("Detected hybrid cgroup mode, but there are controllers available to be enabled in unified hierarchy: {0}".format(available_unified_controllers)) cgroup_api_v1 = SystemdCgroupApiv1() # Previously the agent supported users mounting cgroup v1 controllers in locations other than the systemd # default ('/sys/fs/cgroup'). 
The agent no longer supports this scenario. If any agent supported controller is # mounted in a location other than the systemd default, raise Exception. if not cgroup_api_v1.are_mountpoints_systemd_created(): raise InvalidCgroupMountpointException("Expected cgroup controllers to be mounted at '{0}', but at least one is not. v1 mount points: \n{1}".format(CGROUP_FILE_SYSTEM_ROOT, json.dumps(cgroup_api_v1.get_controller_mountpoints()))) return cgroup_api_v1 raise CGroupsException("{0} has an unexpected file type: {1}".format(CGROUP_FILE_SYSTEM_ROOT, root_hierarchy_mode)) class _SystemdCgroupApi(object): """ Cgroup interface via systemd. Contains common api implementations between cgroup v1 and v2. """ def __init__(self): self._systemd_run_commands = [] self._systemd_run_commands_lock = threading.RLock() def get_cgroup_version(self): """ Returns the version of the cgroup hierarchy in use. """ return NotImplementedError() def get_systemd_run_commands(self): """ Returns a list of the systemd-run commands currently running (given as PIDs) """ with self._systemd_run_commands_lock: return self._systemd_run_commands[:] def get_unit_cgroup(self, unit_name, cgroup_name): """ Cgroup version specific. Returns a representation of the unit cgroup. :param unit_name: The unit to return the cgroup of. :param cgroup_name: A name to represent the cgroup. Used for logging/tracking purposes. """ raise NotImplementedError() def get_cgroup_from_relative_path(self, relative_path, cgroup_name): """ Cgroup version specific. Returns a representation of the cgroup at the provided relative path. :param relative_path: The relative path to return the cgroup of. :param cgroup_name: A name to represent the cgroup. Used for logging/tracking purposes. """ raise NotImplementedError() def get_process_cgroup(self, process_id, cgroup_name): """ Cgroup version specific. Returns a representation of the process' cgroup. 
        :param process_id: A numeric PID to return the cgroup of, or the string "self" to return the cgroup of the current process.
        :param cgroup_name: A name to represent the cgroup. Used for logging/tracking purposes.
        """
        raise NotImplementedError()

    def log_root_paths(self):
        """
        Cgroup version specific. Logs the root paths of the cgroup filesystem/controllers.
        """
        raise NotImplementedError()

    def can_enforce_cpu(self):
        """
        Cgroup version specific. Returns True if the controller can be used for enforcement.
        """
        raise NotImplementedError()

    def start_extension_command(self, extension_name, command, cmd_name, timeout, shell, cwd, env, stdout, stderr,
                                error_code=ExtensionErrorCodes.PluginUnknownFailure):
        scope = "{0}_{1}".format(cmd_name, uuid.uuid4())
        extension_slice_name = CGroupUtil.get_extension_slice_name(extension_name)
        with self._systemd_run_commands_lock:
            process = subprocess.Popen(  # pylint: disable=W1509
                # Some distros (e.g., Ubuntu 20.04) enable CPU and memory accounting by default, which creates nested
                # cgroups under the extension slice. Disabling CPU and memory accounting here prevents those nested
                # cgroups, so that all the counters are present in the extension cgroup, whose slice unit file is
                # configured with accounting enabled.
"systemd-run --property=CPUAccounting=no --property=MemoryAccounting=no --unit={0} --scope --slice={1} {2}".format(scope, extension_slice_name, command), shell=shell, cwd=cwd, stdout=stdout, stderr=stderr, env=env, preexec_fn=os.setsid) # We start systemd-run with shell == True so process.pid is the shell's pid, not the pid for systemd-run self._systemd_run_commands.append(process.pid) scope_name = scope + '.scope' log_cgroup_info("Started extension in unit '{0}'".format(scope_name), send_event=False) cpu_controller = None try: cgroup_relative_path = os.path.join('azure.slice/azure-vmextensions.slice', extension_slice_name) cgroup = self.get_cgroup_from_relative_path(cgroup_relative_path, extension_name) has_cpu_quota = CGroupUtil.has_cpu_quota(extension_slice_name) for controller in cgroup.get_controllers(): if isinstance(controller, _CpuController): cpu_controller = controller if has_cpu_quota: cpu_controller.track_throttle_time(True) # CPU controller track the throttle time only when CPU quota is set CGroupsTelemetry.track_cgroup_controller(controller) except IOError as e: if e.errno == 2: # 'No such file or directory' log_cgroup_info("The extension command already completed; will not track resource usage", send_event=False) log_cgroup_info("Failed to start tracking resource usage for the extension: {0}".format(ustr(e)), send_event=False) except Exception as e: log_cgroup_info("Failed to start tracking resource usage for the extension: {0}".format(ustr(e)), send_event=False) # Wait for process completion or timeout try: return handle_process_completion(process=process, command=command, timeout=timeout, stdout=stdout, stderr=stderr, error_code=error_code, cpu_controller=cpu_controller) except ExtensionError as e: # The extension didn't terminate successfully. Determine whether it was due to systemd errors or # extension errors. if not self._is_systemd_failure(scope, stderr): # There was an extension error; it either timed out or returned a non-zero exit code. 
Re-raise the error raise # There was an issue with systemd-run. We need to log it and retry the extension without systemd. process_output = read_output(stdout, stderr) # Reset the stdout and stderr stdout.truncate(0) stderr.truncate(0) if isinstance(e, ExtensionOperationError): # no-member: Instance of 'ExtensionError' has no 'exit_code' member (no-member) - Disabled: e is actually an ExtensionOperationError err_msg = 'Systemd process exited with code %s and output %s' % ( e.exit_code, process_output) # pylint: disable=no-member else: err_msg = "Systemd timed-out, output: %s" % process_output raise SystemdRunError(err_msg) finally: with self._systemd_run_commands_lock: self._systemd_run_commands.remove(process.pid) @staticmethod def _is_systemd_failure(scope_name, stderr): stderr.seek(0) stderr = ustr(stderr.read(TELEMETRY_MESSAGE_MAX_LEN), encoding='utf-8', errors='backslashreplace') unit_not_found = "Unit {0} not found.".format(scope_name) return unit_not_found in stderr or scope_name not in stderr class SystemdCgroupApiv1(_SystemdCgroupApi): """ Cgroup v1 interface via systemd """ def __init__(self): super(SystemdCgroupApiv1, self).__init__() self._cgroup_mountpoints = self._get_controller_mountpoints() @staticmethod def _get_controller_mountpoints(): """ In v1, each controller is mounted at a different path. Use findmnt to get each path. the output of findmnt is similar to $ findmnt -t cgroup --noheadings /sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd /sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory /sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct etc Returns a dictionary of the controller-path mappings. The dictionary only includes the controllers which are supported by the agent. 
""" mount_points = {} for line in shellutil.run_command(['findmnt', '-t', 'cgroup', '--noheadings']).splitlines(): # In v2, we match only the systemd default mountpoint ('/sys/fs/cgroup'). In v1, we match any path. This # is because the agent previously supported users mounting controllers at locations other than the systemd # default in v1. match = re.search(r'(?P\S+\/(?P\S+))\s+cgroup', line) if match is not None: path = match.group('path') controller = match.group('controller') if controller is not None and path is not None and controller in CgroupV1.get_supported_controller_names(): mount_points[controller] = path return mount_points def get_cgroup_version(self): """ Returns the version of the cgroup hierarchy in use. """ return "v1" def get_controller_mountpoints(self): """ Returns a dictionary of controller-mountpoint mappings. """ return self._cgroup_mountpoints def are_mountpoints_systemd_created(self): """ Systemd mounts each controller at '/sys/fs/cgroup/'. Returns True if all mounted controllers which are supported by the agent have mountpoints which match this pattern, False otherwise. The agent does not support cgroup usage if the default root systemd mountpoint (/sys/fs/cgroup) is not used. This method is used to check if any users are using non-systemd mountpoints. If they are, the agent drop-in files will be cleaned up in cgroupconfigurator. """ for controller, mount_point in self._cgroup_mountpoints.items(): if mount_point != os.path.join(CGROUP_FILE_SYSTEM_ROOT, controller): return False return True @staticmethod def _get_process_relative_controller_paths(process_id): """ Returns the relative paths of the cgroup for the given process as a dict of controller-path mappings. The result only includes controllers which are supported. 
        The contents of the /proc/{process_id}/cgroup file are similar to
            # cat /proc/1218/cgroup
            10:memory:/system.slice/walinuxagent.service
            3:cpu,cpuacct:/system.slice/walinuxagent.service
            etc

        :param process_id: A numeric PID to return the relative paths of, or the string "self" to return the relative
        paths of the current process.
        """
        controller_relative_paths = {}
        for line in fileutil.read_file("/proc/{0}/cgroup".format(process_id)).splitlines():
            match = re.match(r'\d+:(?P<controller>.+):(?P<path>.+)', line)
            if match is not None:
                controller = match.group('controller')
                path = match.group('path').lstrip('/') if match.group('path') != '/' else None
                if path is not None and controller in CgroupV1.get_supported_controller_names():
                    controller_relative_paths[controller] = path
        return controller_relative_paths

    def get_unit_cgroup(self, unit_name, cgroup_name):
        unit_cgroup_relative_path = systemd.get_unit_property(unit_name, "ControlGroup")
        unit_controller_paths = {}
        for controller, mountpoint in self._cgroup_mountpoints.items():
            unit_controller_paths[controller] = os.path.join(mountpoint, unit_cgroup_relative_path[1:])
        return CgroupV1(cgroup_name=cgroup_name, controller_mountpoints=self._cgroup_mountpoints,
                        controller_paths=unit_controller_paths)

    def get_cgroup_from_relative_path(self, relative_path, cgroup_name):
        controller_paths = {}
        for controller, mountpoint in self._cgroup_mountpoints.items():
            controller_paths[controller] = os.path.join(mountpoint, relative_path)
        return CgroupV1(cgroup_name=cgroup_name, controller_mountpoints=self._cgroup_mountpoints,
                        controller_paths=controller_paths)

    def get_process_cgroup(self, process_id, cgroup_name):
        relative_controller_paths = self._get_process_relative_controller_paths(process_id)
        process_controller_paths = {}
        for controller, mountpoint in self._cgroup_mountpoints.items():
            relative_controller_path = relative_controller_paths.get(controller)
            if relative_controller_path is not None:
                process_controller_paths[controller] = os.path.join(mountpoint,
                                                                   relative_controller_path)
        return CgroupV1(cgroup_name=cgroup_name, controller_mountpoints=self._cgroup_mountpoints,
                        controller_paths=process_controller_paths)

    def log_root_paths(self):
        for controller in CgroupV1.get_supported_controller_names():
            mount_point = self._cgroup_mountpoints.get(controller)
            if mount_point is None:
                log_cgroup_info("The {0} controller is not mounted".format(controller))
            else:
                log_cgroup_info("The {0} controller is mounted at {1}".format(controller, mount_point))

    def can_enforce_cpu(self):
        return CgroupV1.CPU_CONTROLLER in self._cgroup_mountpoints


class SystemdCgroupApiv2(_SystemdCgroupApi):
    """
    Cgroup v2 interface via systemd
    """
    def __init__(self):
        super(SystemdCgroupApiv2, self).__init__()
        self._root_cgroup_path = self._get_root_cgroup_path()
        self._controllers_enabled_at_root = self._get_controllers_enabled_at_root(self._root_cgroup_path) if self._root_cgroup_path != "" else []

    @staticmethod
    def _get_root_cgroup_path():
        """
        In v2, there is a unified mount point shared by all controllers. Use findmnt to get the unified mount point.
        The output of findmnt is similar to
            $ findmnt -t cgroup2 --noheadings
            /sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot

        Returns empty string if the root cgroup cannot be determined from the output above.
        """
        for line in shellutil.run_command(['findmnt', '-t', 'cgroup2', '--noheadings']).splitlines():
            # Systemd mounts the cgroup filesystem at '/sys/fs/cgroup'. The agent does not support cgroups if the
            # filesystem is mounted elsewhere, so search specifically for '/sys/fs/cgroup' in the findmnt output.
            match = re.search(r'(?P<path>\/sys\/fs\/cgroup)\s+cgroup2', line)
            if match is not None:
                root_cgroup_path = match.group('path')
                if root_cgroup_path is not None:
                    return root_cgroup_path
        return ""

    def get_cgroup_version(self):
        """
        Returns the version of the cgroup hierarchy in use.
        """
        return "v2"

    def get_root_cgroup_path(self):
        """
        Returns the unified cgroup mountpoint.
""" return self._root_cgroup_path @staticmethod def _get_controllers_enabled_at_root(root_cgroup_path): """ Returns a list of the controllers enabled at the root cgroup. The cgroup.subtree_control file at the root shows a space separated list of the controllers which are enabled to control resource distribution from the root cgroup to its children. If a controller is listed here, then that controller is available to enable in children cgroups. Returns only the enabled controllers which are supported by the agent. $ cat /sys/fs/cgroup/cgroup.subtree_control cpuset cpu io memory hugetlb pids rdma misc """ enabled_controllers_file = os.path.join(root_cgroup_path, 'cgroup.subtree_control') if os.path.exists(enabled_controllers_file): controllers_enabled_at_root = fileutil.read_file(enabled_controllers_file).rstrip().split() return list(set(controllers_enabled_at_root) & set(CgroupV2.get_supported_controller_names())) return [] @staticmethod def _get_process_relative_cgroup_path(process_id): """ Returns the relative path of the cgroup for the given process. The contents of the /proc/{process_id}/cgroup file are similar to # cat /proc/1218/cgroup 0::/azure.slice/walinuxagent.service :param process_id: A numeric PID to return the relative path of, or the string "self" to return the relative path of the current process. 
""" relative_path = "" for line in fileutil.read_file("/proc/{0}/cgroup".format(process_id)).splitlines(): match = re.match(r'0::(?P\S+)', line) if match is not None: relative_path = match.group('path').lstrip('/') if match.group('path') != '/' else "" return relative_path def get_unit_cgroup(self, unit_name, cgroup_name): unit_cgroup_relative_path = systemd.get_unit_property(unit_name, "ControlGroup") unit_cgroup_path = "" if self._root_cgroup_path != "": unit_cgroup_path = os.path.join(self._root_cgroup_path, unit_cgroup_relative_path[1:]) return CgroupV2(cgroup_name=cgroup_name, root_cgroup_path=self._root_cgroup_path, cgroup_path=unit_cgroup_path, enabled_controllers=self._controllers_enabled_at_root) def get_cgroup_from_relative_path(self, relative_path, cgroup_name): cgroup_path = "" if self._root_cgroup_path != "": cgroup_path = os.path.join(self._root_cgroup_path, relative_path) return CgroupV2(cgroup_name=cgroup_name, root_cgroup_path=self._root_cgroup_path, cgroup_path=cgroup_path, enabled_controllers=self._controllers_enabled_at_root) def get_process_cgroup(self, process_id, cgroup_name): relative_path = self._get_process_relative_cgroup_path(process_id) cgroup_path = "" if self._root_cgroup_path != "": cgroup_path = os.path.join(self._root_cgroup_path, relative_path) return CgroupV2(cgroup_name=cgroup_name, root_cgroup_path=self._root_cgroup_path, cgroup_path=cgroup_path, enabled_controllers=self._controllers_enabled_at_root) def log_root_paths(self): log_cgroup_info("The root cgroup path is {0}".format(self._root_cgroup_path)) for controller in CgroupV2.get_supported_controller_names(): if controller in self._controllers_enabled_at_root: log_cgroup_info("The {0} controller is enabled at the root cgroup".format(controller)) else: log_cgroup_info("The {0} controller is not enabled at the root cgroup".format(controller)) def can_enforce_cpu(self): return CgroupV2.CPU_CONTROLLER in self._controllers_enabled_at_root class Cgroup(object): MEMORY_CONTROLLER = 
"memory" def __init__(self, cgroup_name): self._cgroup_name = cgroup_name @staticmethod def get_supported_controller_names(): """ Cgroup version specific. Returns a list of the controllers which the agent supports as strings. """ raise NotImplementedError() def check_in_expected_slice(self, expected_slice): """ Cgroup version specific. Returns True if the cgroup is in the expected slice, False otherwise. :param expected_slice: The slice the cgroup is expected to be in. """ raise NotImplementedError() def get_controllers(self, expected_relative_path=None): """ Cgroup version specific. Returns a list of the agent supported controllers which are mounted/enabled for the cgroup. :param expected_relative_path: The expected relative path of the cgroup. If provided, only controllers mounted at this expected path will be returned. """ raise NotImplementedError() def get_processes(self): """ Cgroup version specific. Returns a list of all the process ids in the cgroup. """ raise NotImplementedError() class CgroupV1(Cgroup): CPU_CONTROLLER = "cpu,cpuacct" def __init__(self, cgroup_name, controller_mountpoints, controller_paths): """ :param cgroup_name: The name of the cgroup. Used for logging/tracking purposes. :param controller_mountpoints: A dictionary of controller-mountpoint mappings for each agent supported controller which is mounted. :param controller_paths: A dictionary of controller-path mappings for each agent supported controller which is mounted. The path represents the absolute path of the controller. 
""" super(CgroupV1, self).__init__(cgroup_name=cgroup_name) self._controller_mountpoints = controller_mountpoints self._controller_paths = controller_paths @staticmethod def get_supported_controller_names(): return [CgroupV1.CPU_CONTROLLER, CgroupV1.MEMORY_CONTROLLER] def check_in_expected_slice(self, expected_slice): in_expected_slice = True for controller, path in self._controller_paths.items(): if expected_slice not in path: log_cgroup_warning("The {0} controller for the {1} cgroup is not mounted in the expected slice. Expected slice: {2}. Actual controller path: {3}".format(controller, self._cgroup_name, expected_slice, path), send_event=False) in_expected_slice = False return in_expected_slice def get_controllers(self, expected_relative_path=None): controllers = [] for supported_controller_name in self.get_supported_controller_names(): controller = None controller_path = self._controller_paths.get(supported_controller_name) controller_mountpoint = self._controller_mountpoints.get(supported_controller_name) if controller_mountpoint is None: # Do not send telemetry here. We already have telemetry for unmounted controllers in cgroup init log_cgroup_warning("{0} controller is not mounted; will not track".format(supported_controller_name), send_event=False) continue if controller_path is None: log_cgroup_warning("{0} is not mounted for the {1} cgroup; will not track".format(supported_controller_name, self._cgroup_name)) continue if expected_relative_path is not None: expected_path = os.path.join(controller_mountpoint, expected_relative_path) if controller_path != expected_path: log_cgroup_warning("The {0} controller is not mounted at the expected path for the {1} cgroup; will not track. 
Actual cgroup path:[{2}] Expected:[{3}]".format(supported_controller_name, self._cgroup_name, controller_path, expected_path)) continue if supported_controller_name == self.CPU_CONTROLLER: controller = CpuControllerV1(self._cgroup_name, controller_path) elif supported_controller_name == self.MEMORY_CONTROLLER: controller = MemoryControllerV1(self._cgroup_name, controller_path) if controller is not None: controllers.append(controller) return controllers def get_controller_procs_path(self, controller): controller_path = self._controller_paths.get(controller) if controller_path is not None and controller_path != "": return os.path.join(controller_path, "cgroup.procs") return "" def get_processes(self): pids = set() for controller in self._controller_paths.keys(): procs_path = self.get_controller_procs_path(controller) if os.path.exists(procs_path): with open(procs_path, "r") as cgroup_procs: for pid in cgroup_procs.read().split(): pids.add(int(pid)) return list(pids) class CgroupV2(Cgroup): CPU_CONTROLLER = "cpu" def __init__(self, cgroup_name, root_cgroup_path, cgroup_path, enabled_controllers): """ :param cgroup_name: The name of the cgroup. Used for logging/tracking purposes. :param root_cgroup_path: A string representing the root cgroup path. String can be empty. :param cgroup_path: A string representing the absolute cgroup path. String can be empty. :param enabled_controllers: A list of strings representing the agent supported controllers enabled at the root cgroup. """ super(CgroupV2, self).__init__(cgroup_name) self._root_cgroup_path = root_cgroup_path self._cgroup_path = cgroup_path self._enabled_controllers = enabled_controllers @staticmethod def get_supported_controller_names(): return [CgroupV2.CPU_CONTROLLER, CgroupV2.MEMORY_CONTROLLER] def check_in_expected_slice(self, expected_slice): if expected_slice not in self._cgroup_path: log_cgroup_warning("The {0} cgroup is not in the expected slice. Expected slice: {1}. 
Actual cgroup path: {2}".format(self._cgroup_name, expected_slice, self._cgroup_path), send_event=False) return False return True def get_controllers(self, expected_relative_path=None): controllers = [] for supported_controller_name in self.get_supported_controller_names(): controller = None if supported_controller_name not in self._enabled_controllers: # Do not send telemetry here. We already have telemetry for disabled controllers in cgroup init log_cgroup_warning("{0} controller is not enabled; will not track".format(supported_controller_name), send_event=False) continue if self._cgroup_path == "": log_cgroup_warning("Cgroup path for {0} cannot be determined; will not track".format(self._cgroup_name)) continue if expected_relative_path is not None: expected_path = os.path.join(self._root_cgroup_path, expected_relative_path) if self._cgroup_path != expected_path: log_cgroup_warning( "The {0} cgroup is not mounted at the expected path; will not track. Actual cgroup path:[{1}] Expected:[{2}]".format( self._cgroup_name, self._cgroup_path, expected_path)) continue if supported_controller_name == self.CPU_CONTROLLER: controller = CpuControllerV2(self._cgroup_name, self._cgroup_path) elif supported_controller_name == self.MEMORY_CONTROLLER: controller = MemoryControllerV2(self._cgroup_name, self._cgroup_path) if controller is not None: controllers.append(controller) return controllers def get_procs_path(self): if self._cgroup_path != "": return os.path.join(self._cgroup_path, "cgroup.procs") return "" def get_processes(self): pids = set() procs_path = self.get_procs_path() if os.path.exists(procs_path): with open(procs_path, "r") as cgroup_procs: for pid in cgroup_procs.read().split(): pids.add(int(pid)) return list(pids) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/cgroupconfigurator.py000066400000000000000000001572711510742556200255170ustar00rootroot00000000000000# -*- encoding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, 
Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import glob import json import os import re import subprocess import threading from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.ga.cgroupcontroller import AGENT_NAME_TELEMETRY, MetricsCounter from azurelinuxagent.ga.cgroupapi import SystemdRunError, EXTENSION_SLICE_PREFIX, CGroupUtil, SystemdCgroupApiv2, \ log_cgroup_info, log_cgroup_warning, create_cgroup_api, InvalidCgroupMountpointException from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.ga.cpucontroller import _CpuController from azurelinuxagent.ga.memorycontroller import _MemoryController from azurelinuxagent.common.exception import ExtensionErrorCodes, CGroupsException, AgentMemoryExceededException from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import systemd from azurelinuxagent.common.version import get_distro from azurelinuxagent.common.utils import shellutil, fileutil from azurelinuxagent.ga.extensionprocessutil import handle_process_completion from azurelinuxagent.common.event import add_event, WALAEventOperation AZURE_SLICE = "azure.slice" _AZURE_SLICE_CONTENTS = """ [Unit] Description=Slice for Azure VM Agent and Extensions DefaultDependencies=no Before=slices.target """ _VMEXTENSIONS_SLICE = EXTENSION_SLICE_PREFIX + ".slice" _AZURE_VMEXTENSIONS_SLICE = AZURE_SLICE + "/" + _VMEXTENSIONS_SLICE _VMEXTENSIONS_SLICE_CONTENTS = """ [Unit] 
Description=Slice for Azure VM Extensions DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes MemoryAccounting=yes """ _EXTENSION_SLICE_CONTENTS = """ [Unit] Description=Slice for Azure VM extension {extension_name} DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes CPUQuota={cpu_quota} MemoryAccounting=yes """ LOGCOLLECTOR_SLICE = "azure-walinuxagent-logcollector.slice" # More info on resource limits properties in systemd here: # https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/resource_management_guide/sec-modifying_control_groups LOGCOLLECTOR_CPU_QUOTA_FOR_V1_AND_V2 = "5%" LOGCOLLECTOR_MEMORY_THROTTLE_LIMIT_FOR_V2 = "170M" LOGCOLLECTOR_MAX_THROTTLED_EVENTS_FOR_V2 = 10 LOGCOLLECTOR_ANON_MEMORY_LIMIT_FOR_V1_AND_V2 = 25 * 1024 ** 2 # 25Mb LOGCOLLECTOR_CACHE_MEMORY_LIMIT_FOR_V1_AND_V2 = 155 * 1024 ** 2 # 155Mb _AGENT_DROP_IN_FILE_SLICE = "10-Slice.conf" _AGENT_DROP_IN_FILE_SLICE_CONTENTS = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] Slice=azure.slice """ _DROP_IN_FILE_CPU_ACCOUNTING = "11-CPUAccounting.conf" _DROP_IN_FILE_CPU_ACCOUNTING_CONTENTS = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] CPUAccounting=yes """ _DROP_IN_FILE_CPU_QUOTA = "12-CPUQuota.conf" _DROP_IN_FILE_CPU_QUOTA_CONTENTS_FORMAT = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] CPUQuota={0} """ _DROP_IN_FILE_MEMORY_ACCOUNTING = "13-MemoryAccounting.conf" _DROP_IN_FILE_MEMORY_ACCOUNTING_CONTENTS = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] MemoryAccounting=yes """ class DisableCgroups(object): ALL = "all" AGENT = "agent" EXTENSIONS = "extensions" class CGroupConfigurator(object): """ This class implements the high-level operations on CGroups (e.g. 
initialization, creation, etc) NOTE: with the exception of start_extension_command, none of the methods in this class raise exceptions (cgroup operations should not block extensions) """ class _Impl(object): def __init__(self): self._initialized = False self._cgroups_supported = False self._agent_cgroups_enabled = False self._extensions_cgroups_enabled = False self._cgroups_api = None self._agent_cgroup = None self._agent_memory_metrics = None self._check_cgroups_lock = threading.RLock() # Protect the check_cgroups which is called from Monitor thread and main loop. self._unexpected_processes = {} def initialize(self): try: if self._initialized: return # check whether cgroup monitoring is supported on the current distro self._cgroups_supported = self._check_cgroups_supported() if not self._cgroups_supported: # If a distro is not supported, attempt to clean up any existing drop in files in case it was # previously supported. It is necessary to cleanup in this scenario in case the OS hits any bugs on # the kernel related to cgroups. log_cgroup_info("Agent will reset the quotas in case cgroup usage went from enabled to disabled") self._reset_agent_cgroup_setup() return # We check the agent unit 'Slice' property before setting up azure.slice. This check is done first # because the agent's Slice unit property will be 'azure.slice' if the slice drop-in file exists, even # though systemd has not moved the agent to azure.slice yet. Systemd will only move the agent to # azure.slice after a vm restart. agent_unit_name = systemd.get_agent_unit_name() agent_slice = systemd.get_unit_property(agent_unit_name, "Slice") if agent_slice not in (AZURE_SLICE, "system.slice"): log_cgroup_warning("The agent is within an unexpected slice: {0}".format(agent_slice)) return # Before agent setup, cleanup the old agent setup (drop-in files) since new agent uses different approach(systemctl) to setup cgroups. 
log_cgroup_info("Cleaning up old agent setup (drop-in files), if any") self._cleanup_old_agent_setup() # Notes about slice setup: # For machines where daemon version did not already create azure.slice, the # agent creates azure.slice and the agent unit Slice drop-in file(without daemon-reload), but systemd does not move the agent # unit to azure.slice until vm restart. It is ok to enable cgroup usage in this case if agent is # running in system.slice. self._setup_azure_slice() # Log mount points/root paths for cgroup controllers self._cgroups_api.log_root_paths() # Get agent cgroup self._agent_cgroup = self._cgroups_api.get_unit_cgroup(unit_name=agent_unit_name, cgroup_name=AGENT_NAME_TELEMETRY) if conf.get_cgroup_disable_on_process_check_failure() and self._check_fails_if_processes_found_in_agent_cgroup_before_enable(agent_slice): reason = "Found unexpected processes in the agent cgroup before agent enable cgroups." self.disable(reason, DisableCgroups.ALL) return # Get controllers to track agent_controllers = self._agent_cgroup.get_controllers(expected_relative_path=os.path.join(agent_slice, agent_unit_name)) if len(agent_controllers) > 0: self.enable() self._enable_accounting(agent_unit_name) for controller in agent_controllers: for prop in controller.get_unit_properties(): log_cgroup_info('Agent {0} unit property value: {1}'.format(prop, systemd.get_unit_property(systemd.get_agent_unit_name(), prop))) if isinstance(controller, _CpuController) and self._cgroups_api.can_enforce_cpu(): self._set_cpu_quota(agent_unit_name, conf.get_agent_cpu_quota()) controller.track_throttle_time(True) # CPU controller track the throttle time only when CPU quota is set elif isinstance(controller, _MemoryController): self._agent_memory_metrics = controller CGroupsTelemetry.track_cgroup_controller(controller) except Exception as exception: log_cgroup_warning("Error initializing cgroups: {0}".format(ustr(exception))) finally: log_cgroup_info('Agent cgroups enabled: 
{0}'.format(self._agent_cgroups_enabled)) self._initialized = True if self._cgroups_api is not None and not self._cgroups_api.can_enforce_cpu(): # If agent cgroups are not enabled or quotas not enabled, reset the quota for the agent unit log_cgroup_info("Reset CPU quota if agent cgroups were not enabled for enforcement") self._reset_cpu_quota(systemd.get_agent_unit_name()) def _check_cgroups_supported(self): distro_supported = CGroupUtil.distro_supported() if not distro_supported: log_cgroup_info("Cgroups is not currently supported on {0}".format(get_distro()), send_event=True) return False if not systemd.is_systemd(): log_cgroup_warning("systemd was not detected on {0}".format(get_distro()), send_event=True) log_cgroup_info("Cgroups won't be supported on non-systemd systems", send_event=True) return False if not self._check_no_legacy_cgroups(): log_cgroup_warning("The daemon's PID was added to a legacy cgroup; will not enable cgroups.", send_event=True) return False try: self._cgroups_api = create_cgroup_api() log_cgroup_info("Using cgroup {0} for resource enforcement and monitoring".format(self._cgroups_api.get_cgroup_version())) except InvalidCgroupMountpointException as e: # Systemd mounts the cgroup file system at '/sys/fs/cgroup'. Previously, the agent supported cgroup # usage if a user mounted the cgroup filesystem elsewhere. The agent no longer supports that # scenario. Cleanup any existing drop in files in case the agent previously supported cgroups on # this machine. 
                log_cgroup_warning(
                    "The agent does not support cgroups if the default systemd mountpoint is not being used: {0}".format(
                        ustr(e)), send_event=True)
                return False
            except CGroupsException as e:
                log_cgroup_warning("Unable to determine which cgroup version to use: {0}".format(ustr(e)), send_event=True)
                return False

            if self.using_cgroup_v2():
                log_cgroup_info("Only resource monitoring is currently supported on cgroup v2", send_event=True)

            return True

        @staticmethod
        def _check_no_legacy_cgroups():
            """
            Older versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent. When running
            under systemd this could produce invalid resource usage data. Cgroups should not be enabled under this condition.
            """
            legacy_cgroups = CGroupUtil.cleanup_legacy_cgroups()
            if legacy_cgroups > 0:
                return False
            return True

        @staticmethod
        def _cleanup_old_agent_setup():
            """
            The new agent uses systemctl commands instead of drop-in files to apply the desired configuration, so it cleans up the old drop-in files.
            We will keep this cleanup code for a few agent versions, until we determine that all VMs have moved to the new agent version.
            """
            # Older agents used to create this slice, but it was never used. Cleanup the file.
            CGroupConfigurator._Impl._cleanup_unit_file("/etc/systemd/system/system-walinuxagent.extensions.slice")

            unit_file_install_path = systemd.get_unit_file_install_path()
            logcollector_slice = os.path.join(unit_file_install_path, LOGCOLLECTOR_SLICE)
            agent_drop_in_path = systemd.get_agent_drop_in_path()
            agent_drop_in_file_cpu_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING)
            agent_drop_in_file_memory_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING)
            agent_drop_in_file_cpu_quota = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_QUOTA)

            # The new agent will set up limits on the scope instead of the slice, so remove the existing logcollector slice.
            CGroupConfigurator._Impl._cleanup_unit_file(logcollector_slice)

            # Cleanup the old drop-in files; the new agent will use systemctl set-property to enable accounting and limits
            CGroupConfigurator._Impl._cleanup_unit_file(agent_drop_in_file_cpu_accounting)
            CGroupConfigurator._Impl._cleanup_unit_file(agent_drop_in_file_memory_accounting)
            CGroupConfigurator._Impl._cleanup_unit_file(agent_drop_in_file_cpu_quota)

        @staticmethod
        def _setup_azure_slice():
            """
            The agent creates "azure.slice" for use by extensions and the agent. The agent runs under "azure.slice" directly and each
            extension runs under its own slice ("Microsoft.CPlat.Extension.slice" in the example below). All the slices for
            extensions are grouped under "vmextensions.slice".
            Example:
                -.slice
                ├─user.slice
                ├─system.slice
                └─azure.slice
                  ├─walinuxagent.service
                  │ ├─5759 /usr/bin/python3 -u /usr/sbin/waagent -daemon
                  │ └─5764 python3 -u bin/WALinuxAgent-2.2.53-py2.7.egg -run-exthandlers
                  └─azure-vmextensions.slice
                    └─Microsoft.CPlat.Extension.slice
                      └─5894 /usr/bin/python3 /var/lib/waagent/Microsoft.CPlat.Extension-1.0.0.0/enable.py

            This method ensures that the "azure" and "vmextensions" slices are created. Setup should create those slices
            under /lib/systemd/system; but if they do not exist, this method will create them.
""" unit_file_install_path = systemd.get_unit_file_install_path() azure_slice = os.path.join(unit_file_install_path, AZURE_SLICE) vmextensions_slice = os.path.join(unit_file_install_path, _VMEXTENSIONS_SLICE) agent_unit_file = systemd.get_agent_unit_file() agent_drop_in_path = systemd.get_agent_drop_in_path() agent_drop_in_file_slice = os.path.join(agent_drop_in_path, _AGENT_DROP_IN_FILE_SLICE) files_to_create = [] if not os.path.exists(azure_slice): files_to_create.append((azure_slice, _AZURE_SLICE_CONTENTS)) if not os.path.exists(vmextensions_slice): files_to_create.append((vmextensions_slice, _VMEXTENSIONS_SLICE_CONTENTS)) if fileutil.findre_in_file(agent_unit_file, r"Slice=") is not None: CGroupConfigurator._Impl._cleanup_unit_file(agent_drop_in_file_slice) else: if not os.path.exists(agent_drop_in_file_slice): files_to_create.append((agent_drop_in_file_slice, _AGENT_DROP_IN_FILE_SLICE_CONTENTS)) if len(files_to_create) > 0: # create the unit files, but if 1 fails remove all and return try: for path, contents in files_to_create: CGroupConfigurator._Impl._create_unit_file(path, contents) except Exception as exception: log_cgroup_warning("Failed to create unit files for the azure slice: {0}".format(ustr(exception))) for unit_file in files_to_create: CGroupConfigurator._Impl._cleanup_unit_file(unit_file) return def _reset_agent_cgroup_setup(self): try: agent_drop_in_path = systemd.get_agent_drop_in_path() if os.path.exists(agent_drop_in_path) and os.path.isdir(agent_drop_in_path) and len(os.listdir(agent_drop_in_path)) > 0: files_to_cleanup = [] agent_drop_in_file_slice = os.path.join(agent_drop_in_path, _AGENT_DROP_IN_FILE_SLICE) if os.path.exists(agent_drop_in_file_slice): files_to_cleanup.append(agent_drop_in_file_slice) agent_drop_in_file_cpu_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) if os.path.exists(agent_drop_in_file_cpu_accounting): files_to_cleanup.append(agent_drop_in_file_cpu_accounting) 
                    agent_drop_in_file_memory_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING)
                    if os.path.exists(agent_drop_in_file_memory_accounting):
                        files_to_cleanup.append(agent_drop_in_file_memory_accounting)
                    agent_drop_in_file_cpu_quota = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_QUOTA)
                    if os.path.exists(agent_drop_in_file_cpu_quota):
                        files_to_cleanup.append(agent_drop_in_file_cpu_quota)

                    if len(files_to_cleanup) > 0:
                        log_cgroup_info("Found drop-in files; attempting agent cgroup setup cleanup", send_event=False)
                        self._cleanup_all_files(files_to_cleanup)
                        self._reset_cpu_quota(systemd.get_agent_unit_name())
            except Exception as err:
                logger.warn("Error while resetting the quotas: {0}".format(err))

        @staticmethod
        def _enable_accounting(unit_name):
            """
            Enable CPU and Memory accounting for the unit
            """
            try:
                # we don't use daemon-reload and drop-in files for accounting, so it is enabled with systemctl set-property
                accounting_properties = ("CPUAccounting", "MemoryAccounting")
                values = ("yes", "yes")
                log_cgroup_info("Enabling accounting properties for the agent: {0}".format(accounting_properties))
                systemd.set_unit_run_time_properties(unit_name, accounting_properties, values)
            except Exception as exception:
                log_cgroup_warning("Failed to set accounting properties for the agent: {0}".format(ustr(exception)))

        # W0238: Unused private member `_Impl.__create_unit_file(path, contents)` (unused-private-member)
        @staticmethod
        def _create_unit_file(path, contents):  # pylint: disable=unused-private-member
            parent, _ = os.path.split(path)
            if not os.path.exists(parent):
                fileutil.mkdir(parent, mode=0o755)
            exists = os.path.exists(path)
            fileutil.write_file(path, contents)
            log_cgroup_info("{0} {1}".format("Updated" if exists else "Created", path))

        # W0238: Unused private member `_Impl.__cleanup_unit_file(path)` (unused-private-member)
        @staticmethod
        def _cleanup_unit_file(path):  # pylint: disable=unused-private-member
            if os.path.exists(path):
                try:
                    os.remove(path)
                    log_cgroup_info("Removed {0}".format(path))
                except Exception as exception:
                    log_cgroup_warning("Failed to remove {0}: {1}".format(path, ustr(exception)))

        @staticmethod
        def _cleanup_all_files(files_to_cleanup):
            for path in files_to_cleanup:
                if os.path.exists(path):
                    try:
                        os.remove(path)
                        log_cgroup_info("Removed {0}".format(path))
                    except Exception as exception:
                        log_cgroup_warning("Failed to remove {0}: {1}".format(path, ustr(exception)))

        @staticmethod
        def _create_all_files(files_to_create):
            # create the unit files, but if one fails, remove them all and return
            try:
                for path, contents in files_to_create:
                    CGroupConfigurator._Impl._create_unit_file(path, contents)
            except Exception as exception:
                log_cgroup_warning("Failed to create unit files : {0}".format(ustr(exception)))
                for unit_file in files_to_create:
                    CGroupConfigurator._Impl._cleanup_unit_file(unit_file)
                return

        def supported(self):
            return self._cgroups_supported

        def enabled(self):
            return self._agent_cgroups_enabled or self._extensions_cgroups_enabled

        def agent_enabled(self):
            return self._agent_cgroups_enabled

        def extensions_enabled(self):
            return self._extensions_cgroups_enabled

        def using_cgroup_v2(self):
            return isinstance(self._cgroups_api, SystemdCgroupApiv2)

        def enable(self):
            if not self.supported():
                raise CGroupsException(
                    "Attempted to enable cgroups, but they are not supported on the current platform")
            self._agent_cgroups_enabled = True
            self._extensions_cgroups_enabled = True

        def disable(self, reason, disable_cgroups):
            """
            TODO: This method needs a refactor. We should not disable the cgroups if we fail to reset the agent's cgroup quota.
             Today we disable the cgroups even if we fail to reset the agent cgroup quota; as a result, extensions may run with the agent's limits, which is not good.
             On the other hand, if we don't disable the cgroups, we end up calling the reset until systemd recovers from the error.
             If the systemd error is a connection timeout, that just adds significant delay to the extension execution.
            """
            try:
                if disable_cgroups == DisableCgroups.ALL:  # disable all
                    # Reset quotas
                    self._reset_cpu_quota(systemd.get_agent_unit_name())
                    extension_services = self.get_extension_services_list()
                    for extension in extension_services:
                        log_cgroup_info("Resetting extension: {0} and its services: {1} quota".format(extension, extension_services[extension]), send_event=False)
                        self.reset_extension_quota(extension_name=extension)
                        self.reset_extension_services_quota(extension_services[extension])
                    CGroupsTelemetry.reset()
                    self._agent_cgroups_enabled = False
                    self._extensions_cgroups_enabled = False
                elif disable_cgroups == DisableCgroups.AGENT:  # disable agent
                    self._reset_cpu_quota(systemd.get_agent_unit_name())
                    agent_controllers = self._agent_cgroup.get_controllers()
                    for controller in agent_controllers:
                        if isinstance(controller, _CpuController):
                            CGroupsTelemetry.stop_tracking(controller)
                            break
                    self._agent_cgroups_enabled = False
                log_cgroup_warning("Disabling resource usage monitoring. Reason: {0}".format(reason), op=WALAEventOperation.CGroupsDisabled)
            except Exception as exception:
                log_cgroup_warning("Error disabling cgroups: {0}".format(ustr(exception)))

        def _set_cpu_quota(self, unit_name, quota):
            """
            Sets CPU quota to the given percentage (100% == 1 CPU)

            NOTE: This is done using systemctl set-property --runtime; any local overrides in the /etc folder on the VM will take precedence over this setting.
            """
            if self._cgroups_api.can_enforce_cpu():
                quota_percentage = "{0}%".format(quota)
                log_cgroup_info("Setting {0}'s CPUQuota to {1}".format(unit_name, quota_percentage))
                CGroupConfigurator._Impl._try_set_cpu_quota(unit_name, quota_percentage)

        def _reset_cpu_quota(self, unit_name):
            """
            Removes any CPUQuota on the given unit

            NOTE: This resets the runtime CPUQuota previously set by the agent; any local overrides on the VM will take precedence over this setting.
            """
            log_cgroup_info("Resetting {0}'s CPUQuota".format(unit_name), send_event=False)
            if CGroupConfigurator._Impl._try_set_cpu_quota(unit_name, "infinity"):  # systemd expresses no-quota as infinity, following the same convention
                try:
                    log_cgroup_info('Current CPUQuota: {0}'.format(systemd.get_unit_property(unit_name, "CPUQuotaPerSecUSec")))
                except Exception as e:
                    log_cgroup_warning('Failed to get current CPUQuotaPerSecUSec after reset: {0}'.format(ustr(e)))

        # W0238: Unused private member `_Impl.__try_set_cpu_quota(quota)` (unused-private-member)
        @staticmethod
        def _try_set_cpu_quota(unit_name, quota):  # pylint: disable=unused-private-member
            try:
                current_cpu_quota = CGroupUtil.get_current_cpu_quota(unit_name)
                if current_cpu_quota == quota:
                    return True
                quota = quota if quota != "infinity" else ""  # no-quota is expressed as an empty string when setting the property
                systemd.set_unit_run_time_property(unit_name, "CPUQuota", quota)
            except Exception as exception:
                log_cgroup_warning('Failed to set CPUQuota: {0}'.format(ustr(exception)))
                return False
            return True

        def _check_fails_if_processes_found_in_agent_cgroup_before_enable(self, agent_slice):
            """
            This check ensures that, before we enable the agent's cgroups, there are no unexpected processes in the agent's cgroup already.
            The issue we observed is that long-running extension processes may be in the agent's cgroups if the agent goes through the cycle enabled(1)->disabled(2)->enabled(3).
            1. Agent cgroups enabled in some version
            2. Agent cgroups disabled by the regular check_cgroups check. Once we disable the cgroups we don't run the extensions in their own slice, so they will be in the agent's cgroups.
            3. When ext_handler restarts and enables the cgroups again, the already-running processes from step 2 will still be in the agent's cgroups. This may cause the extensions to run with the agent's limits.
""" if agent_slice not in (AZURE_SLICE, "system.slice"): return False try: log_cgroup_info("Checking for unexpected processes in the agent's cgroup before enabling cgroups") self._check_processes_in_agent_cgroup(True) except CGroupsException as exception: log_cgroup_warning(ustr(exception)) return True return False def check_cgroups(self, cgroup_metrics): self._check_cgroups_lock.acquire() try: if not self.enabled(): return errors = [] process_check_success = False try: self._check_processes_in_agent_cgroup(False) process_check_success = True except CGroupsException as exception: errors.append(exception) quota_check_success = False try: if cgroup_metrics: self._check_agent_throttled_time(cgroup_metrics) quota_check_success = True except CGroupsException as exception: errors.append(exception) reason = "Check on cgroups failed:\n{0}".format("\n".join([ustr(e) for e in errors])) if not process_check_success and conf.get_cgroup_disable_on_process_check_failure(): self.disable(reason, DisableCgroups.ALL) if not quota_check_success and conf.get_cgroup_disable_on_quota_check_failure(): self.disable(reason, DisableCgroups.AGENT) finally: self._check_cgroups_lock.release() def _check_processes_in_agent_cgroup(self, report_immediately): """ Verifies that the agent's cgroup includes only the current process, its parent, commands started using shellutil and instances of systemd-run (those processes correspond, respectively, to the extension handler, the daemon, commands started by the extension handler, and the systemd-run commands used to start extensions on their own cgroup). Other processes started by the agent (e.g. extensions) and processes not started by the agent (e.g. services installed by extensions) are reported as unexpected, since they should belong to their own cgroup. Raises a CGroupsException only when current unexpected process seen last time. report_immediately - flag to switch to old behavior and report immediately if any unexpected process found. 
            Note: The process check was added as a conservative approach before the cgroups feature was stable. Now it's producing noise due to race conditions;
            some of those issues are an extra process observed before systemd moves it to the new cgroup, or a process that is about to die. So we are changing the
            behavior to raise an issue only when we see the same unexpected process on the last check. Later we will remove the check if no issues are reported.
            """
            current_unexpected = {}
            agent_cgroup_proc_names = []
            report = []
            try:
                daemon = os.getppid()
                extension_handler = os.getpid()
                agent_commands = set()
                agent_commands.update(shellutil.get_running_commands())
                systemd_run_commands = set()
                systemd_run_commands.update(self._cgroups_api.get_systemd_run_commands())
                agent_cgroup_processes = self._agent_cgroup.get_processes()
                # get the running commands again in case new commands started or completed while we were fetching the processes in the cgroup;
                agent_commands.update(shellutil.get_running_commands())
                systemd_run_commands.update(self._cgroups_api.get_systemd_run_commands())

                for process in agent_cgroup_processes:
                    agent_cgroup_proc_names.append(self._format_process(process))
                    # Note that the agent uses systemd-run to start extensions; systemd-run belongs to the agent cgroup, though the extensions don't.
if process in (daemon, extension_handler) or process in systemd_run_commands: continue # check shell systemd_run process if above process check didn't catch it if self._check_systemd_run_process(process): continue # systemd_run_commands contains the shell that started systemd-run, so we also need to check for the parent if self._get_parent(process) in systemd_run_commands and self._get_command( process) == 'systemd-run': continue # check if the process is a command started by the agent or a descendant of one of those commands current = process while current != 0 and current not in agent_commands: current = self._get_parent(current) # Verify if Process started by agent based on the marker found in process environment or process is in Zombie state. # If so, consider it as valid process in agent cgroup. if current == 0 and not (self._is_process_descendant_of_the_agent(process) or self._is_zombie_process(process)): current_unexpected[process] = self._format_process(process) if report_immediately: report = current_unexpected.values() else: for process in current_unexpected: if process in self._unexpected_processes: report.append(current_unexpected[process]) if len(report) >= 5: # collect just a small sample break self._unexpected_processes = current_unexpected except Exception as exception: log_cgroup_warning("Error checking the processes in the agent's cgroup: {0}".format(ustr(exception))) if len(report) > 0: self._report_agent_cgroups_procs(agent_cgroup_proc_names, report) raise CGroupsException("The agent's cgroup includes unexpected processes: {0}".format(report)) def get_logcollector_unit_properties(self): """ Returns the systemd unit properties for the log collector process. Each property should be explicitly set (even if already included in the log collector slice) for the log collector process to run in the transient scope directory with the expected accounting and limits. 
""" logcollector_properties = ["--property=CPUAccounting=yes", "--property=MemoryAccounting=yes", "--property=CPUQuota={0}".format(LOGCOLLECTOR_CPU_QUOTA_FOR_V1_AND_V2)] if not self.using_cgroup_v2(): return logcollector_properties # Memory throttling limit is used when running log collector on v2 machines using the 'MemoryHigh' property. # We do not use a systemd property to enforce memory on V1 because it invokes the OOM killer if the limit # is exceeded. logcollector_properties.append("--property=MemoryHigh={0}".format(LOGCOLLECTOR_MEMORY_THROTTLE_LIMIT_FOR_V2)) return logcollector_properties @staticmethod def _get_command(pid): try: with open('/proc/{0}/comm'.format(pid), "r") as file_: comm = file_.read() if comm and comm[-1] == '\x00': # if null-terminated, remove the null comm = comm[:-1] return comm.rstrip() except Exception: return "UNKNOWN" @staticmethod def _format_process(pid): """ Formats the given PID as a string containing the PID and the corresponding command line truncated to 64 chars """ try: cmdline = '/proc/{0}/cmdline'.format(pid) if os.path.exists(cmdline): with open(cmdline, "r") as cmdline_file: return "[PID: {0}] {1:64.64}".format(pid, cmdline_file.read()) except Exception: pass return "[PID: {0}] UNKNOWN".format(pid) @staticmethod def _is_process_descendant_of_the_agent(pid): """ Returns True if the process is descendant of the agent by looking at the env flag(AZURE_GUEST_AGENT_PARENT_PROCESS_NAME) that we set when the process starts otherwise False. """ try: env = '/proc/{0}/environ'.format(pid) if os.path.exists(env): with open(env, "r") as env_file: environ = env_file.read() if environ and environ[-1] == '\x00': environ = environ[:-1] return "{0}={1}".format(shellutil.PARENT_PROCESS_NAME, shellutil.AZURE_GUEST_AGENT) in environ except Exception: pass return False @staticmethod def _is_zombie_process(pid): """ Returns True if process is in Zombie state otherwise False. 
Ex: cat /proc/18171/stat 18171 (python3) S 18103 18103 18103 0 -1 4194624 57736 64902 0 3 """ try: stat = '/proc/{0}/stat'.format(pid) if os.path.exists(stat): with open(stat, "r") as stat_file: return stat_file.read().split()[2] == 'Z' except Exception: pass return False @staticmethod def _check_systemd_run_process(process): """ Returns True if process is shell systemd-run process started by agent otherwise False. Ex: sh,7345 -c systemd-run --unit=enable_7c5cab19-eb79-4661-95d9-9e5091bd5ae0 --scope --slice=azure-vmextensions-Microsoft.OSTCExtensions.VMAccessForLinux_1.5.11.slice /var/lib/waagent/Microsoft.OSTCExtensions.VMAccessForLinux-1.5.11/processes.sh """ try: process_name = "UNKNOWN" cmdline = '/proc/{0}/cmdline'.format(process) if os.path.exists(cmdline): with open(cmdline, "r") as cmdline_file: process_name = "{0}".format(cmdline_file.read()) match = re.search(r'systemd-run.*--unit=.*--scope.*--slice=azure-vmextensions.*', process_name) if match is not None: return True except Exception: pass return False @staticmethod def _report_agent_cgroups_procs(agent_cgroup_proc_names, unexpected): for proc_name in unexpected: if 'UNKNOWN' in proc_name: msg = "Agent includes following processes when UNKNOWN process found: {0}".format("\n".join([ustr(proc) for proc in agent_cgroup_proc_names])) add_event(op=WALAEventOperation.CGroupsInfo, message=msg) @staticmethod def _check_agent_throttled_time(cgroup_metrics): for metric in cgroup_metrics: if metric.instance == AGENT_NAME_TELEMETRY and metric.counter == MetricsCounter.THROTTLED_TIME: if metric.value > conf.get_agent_cpu_throttled_time_threshold(): raise CGroupsException("The agent has been throttled for {0} seconds".format(metric.value)) def check_agent_memory_usage(self): if self.enabled() and self._agent_memory_metrics is not None: metrics = self._agent_memory_metrics.get_tracked_metrics() current_usage = 0 for metric in metrics: if metric.counter == MetricsCounter.TOTAL_MEM_USAGE: current_usage += metric.value 
elif metric.counter == MetricsCounter.SWAP_MEM_USAGE: current_usage += metric.value if current_usage > conf.get_agent_memory_quota(): raise AgentMemoryExceededException("The agent memory limit {0} bytes exceeded. The current reported usage is {1} bytes.".format(conf.get_agent_memory_quota(), current_usage)) @staticmethod def _get_parent(pid): """ Returns the parent of the given process. If the parent cannot be determined returns 0 (which is the PID for the scheduler) """ try: stat = '/proc/{0}/stat'.format(pid) if os.path.exists(stat): with open(stat, "r") as stat_file: return int(stat_file.read().split()[3]) except Exception: pass return 0 def start_tracking_unit_cgroups(self, unit_name): if self.enabled(): try: cgroup = self._cgroups_api.get_unit_cgroup(unit_name, unit_name) controllers = cgroup.get_controllers() has_cpu_quota = CGroupUtil.has_cpu_quota(unit_name) for controller in controllers: if isinstance(controller, _CpuController) and has_cpu_quota: controller.track_throttle_time(True) # CPU controller track the throttle time only when CPU quota is set CGroupsTelemetry.track_cgroup_controller(controller) except Exception as exception: log_cgroup_info("Failed to start tracking resource usage for the extension: {0}".format(ustr(exception)), send_event=False) def stop_tracking_unit_cgroups(self, unit_name): if self.enabled(): try: cgroup = self._cgroups_api.get_unit_cgroup(unit_name, unit_name) controllers = cgroup.get_controllers() for controller in controllers: CGroupsTelemetry.stop_tracking(controller) except Exception as exception: log_cgroup_info("Failed to stop tracking resource usage for the extension service: {0}".format(ustr(exception)), send_event=False) def stop_tracking_extension_cgroups(self, extension_name): if self.enabled(): try: extension_slice_name = CGroupUtil.get_extension_slice_name(extension_name) cgroup_relative_path = os.path.join(_AZURE_VMEXTENSIONS_SLICE, extension_slice_name) cgroup = 
self._cgroups_api.get_cgroup_from_relative_path(relative_path=cgroup_relative_path, cgroup_name=extension_name)
                    controllers = cgroup.get_controllers()
                    for controller in controllers:
                        CGroupsTelemetry.stop_tracking(controller)
                except Exception as exception:
                    log_cgroup_info("Failed to stop tracking resource usage for the extension service: {0}".format(ustr(exception)), send_event=False)

        def start_extension_command(self, extension_name, command, cmd_name, timeout, shell, cwd, env, stdout, stderr, error_code=ExtensionErrorCodes.PluginUnknownFailure):
            """
            Starts a command (install/enable/etc) for an extension and adds the command's PID to the extension's cgroup
            :param extension_name: The extension executing the command
            :param command: The command to invoke
            :param cmd_name: The type of the command (enable, install, etc.)
            :param timeout: Number of seconds to wait for command completion
            :param shell: Whether to execute the command through the shell
            :param cwd: The working directory for the command
            :param env: The environment to pass to the command's process
            :param stdout: File object to redirect stdout to
            :param stderr: File object to redirect stderr to
            :param error_code: Extension error code to raise in case of error
            """
            if self.enabled():
                try:
                    return self._cgroups_api.start_extension_command(extension_name, command, cmd_name, timeout, shell=shell, cwd=cwd, env=env, stdout=stdout, stderr=stderr, error_code=error_code)
                except SystemdRunError as exception:
                    reason = 'Failed to start {0} using systemd-run, will try invoking the extension directly.
Error: {1}'.format(extension_name, ustr(exception))
                    self.disable(reason, DisableCgroups.ALL)
                    # fall-through and re-invoke the extension

            # subprocess-popen-preexec-fn Disabled: code is not multi-threaded
            process = subprocess.Popen(command, shell=shell, cwd=cwd, env=env, stdout=stdout, stderr=stderr, preexec_fn=os.setsid)  # pylint: disable=W1509
            return handle_process_completion(process=process, command=command, timeout=timeout, stdout=stdout, stderr=stderr, error_code=error_code)

        @staticmethod
        def _get_unit_properties_requiring_update(unit_name, cpu_quota=""):
            """
            Check if the cgroups setup is completed for the unit and return the properties that need an update.
            """
            properties_to_update = ()
            properties_values = ()
            cpu_accounting = systemd.get_unit_property(unit_name, "CPUAccounting")
            if cpu_accounting != "yes":
                properties_to_update += ("CPUAccounting",)
                properties_values += ("yes",)
            memory_accounting = systemd.get_unit_property(unit_name, "MemoryAccounting")
            if memory_accounting != "yes":
                properties_to_update += ("MemoryAccounting",)
                properties_values += ("yes",)
            current_cpu_quota = CGroupUtil.get_current_cpu_quota(unit_name)
            if current_cpu_quota != cpu_quota:
                properties_to_update += ("CPUQuota",)
                # no-quota is expressed as an empty string when setting the property
                cpu_quota = cpu_quota if cpu_quota != "infinity" else ""
                properties_values += (cpu_quota,)
            return properties_to_update, properties_values

        def setup_extension_slice(self, extension_name, cpu_quota):
            """
            Each extension runs under its own slice (e.g. "Microsoft.CPlat.Extension.slice"). All the slices for
            extensions are grouped under "azure-vmextensions.slice".
            This method ensures that the desired configuration is created for the extension slice using systemctl set-property.
            TODO: set memory quotas
            """
            if self.enabled():
                extension_slice = CGroupUtil.get_extension_slice_name(extension_name)
                try:
                    # clean up the old slice from the disk; the new agent uses systemctl set-property
                    unit_file_install_path = systemd.get_unit_file_install_path()
                    extension_slice_path = os.path.join(unit_file_install_path, extension_slice)
                    CGroupConfigurator._Impl._cleanup_unit_file(extension_slice_path)

                    # clean up the old-old slice (which included the version in its name) from the disk
                    old_extension_slice_path = os.path.join(unit_file_install_path, CGroupUtil.get_extension_slice_name(extension_name, old_slice=True))
                    if os.path.exists(old_extension_slice_path):
                        CGroupConfigurator._Impl._cleanup_unit_file(old_extension_slice_path)

                    cpu_quota = "{0}%".format(cpu_quota) if cpu_quota is not None and self._cgroups_api.can_enforce_cpu() else "infinity"  # following systemd convention for no-quota (infinity)
                    properties_to_update, properties_values = self._get_unit_properties_requiring_update(extension_slice, cpu_quota)
                    if len(properties_to_update) > 0:
                        if cpu_quota == "infinity":
                            log_cgroup_info("CPUQuota not set for {0}".format(extension_name))
                        else:
                            log_cgroup_info("Setting {0}'s CPUQuota to {1}".format(extension_name, cpu_quota))
                        log_cgroup_info("Setting up the resource properties: {0} for {1}".format(properties_to_update, extension_slice))
                        systemd.set_unit_run_time_properties(extension_slice, properties_to_update, properties_values)
                except Exception as exception:
                    log_cgroup_warning("Failed to set the extension {0} slice and quotas: {1}".format(extension_slice, ustr(exception)))

        def reset_extension_quota(self, extension_name):
            """
            Removes any CPUQuota on the extension

            NOTE: This resets the quota on the extension's slice; any local overrides on the VM will take precedence over this setting.
TODO: reset memory quotas """ if self.enabled(): try: self._reset_cpu_quota(CGroupUtil.get_extension_slice_name(extension_name)) except Exception as exception: log_cgroup_warning('Failed to reset for {0}: {1}'.format(extension_name, ustr(exception))) def set_extension_services_cpu_memory_quota(self, services_list): """ Each extension service will have name, systemd path and it's quotas. This method ensure limits set with systemtctl at runtime TODO: set memory quotas """ if self.enabled() and services_list is not None: for service in services_list: service_name = service.get('name', None) unit_file_path = systemd.get_unit_file_install_path() if service_name is not None and unit_file_path is not None: # remove drop files from disk, new agent use systemdctl set-property files_to_remove = [] drop_in_path = os.path.join(unit_file_path, "{0}.d".format(service_name)) drop_in_file_cpu_accounting = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) files_to_remove.append(drop_in_file_cpu_accounting) drop_in_file_memory_accounting = os.path.join(drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) files_to_remove.append(drop_in_file_memory_accounting) drop_in_file_cpu_quota = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_QUOTA) files_to_remove.append(drop_in_file_cpu_quota) self._cleanup_all_files(files_to_remove) cpu_quota = service.get('cpuQuotaPercentage') cpu_quota = "{0}%".format(cpu_quota) if cpu_quota is not None and self._cgroups_api.can_enforce_cpu() else "infinity" # following systemd convention for no-quota (infinity) try: properties_to_update, properties_values = self._get_unit_properties_requiring_update(service_name, cpu_quota) except Exception as exception: log_cgroup_warning("Failed to get the properties to update for {0}: {1}".format(service_name, ustr(exception))) # when we fail to get the properties to update, we will skip the set-property and continue for next service continue # If systemd is unaware of extension services and not loaded in the system yet, 
we get error while setting quotas. Hence, added unit loaded check. if systemd.is_unit_loaded(service_name) and len(properties_to_update) > 0: if cpu_quota != "infinity": log_cgroup_info("Setting {0}'s CPUQuota to {1}".format(service_name, cpu_quota)) else: log_cgroup_info("CPUQuota not set for {0}".format(service_name)) log_cgroup_info("Setting up resource properties: {0} for {1}" .format(properties_to_update, service_name)) try: systemd.set_unit_run_time_properties(service_name, properties_to_update, properties_values) except Exception as exception: log_cgroup_warning("Failed to set the quotas for {0}: {1}".format(service_name, ustr(exception))) def reset_extension_services_quota(self, services_list): """ Removes any CPUQuota on the extension service NOTE: This resets the quota on the extension service's default; any local overrides on the VM will take precedence over this setting. TODO: reset memory quotas """ if self.enabled() and services_list is not None: service_name = None try: for service in services_list: service_name = service.get('name', None) if service_name is not None and systemd.is_unit_loaded(service_name): self._reset_cpu_quota(service_name) except Exception as exception: log_cgroup_warning('Failed to reset for {0} : {1}'.format(service_name, ustr(exception))) def stop_tracking_extension_services_cgroups(self, services_list): """ Remove the cgroup entry from the tracked groups to stop tracking. """ if self.enabled() and services_list is not None: for service in services_list: service_name = service.get('name', None) if service_name is not None: self.stop_tracking_unit_cgroups(service_name) def start_tracking_extension_services_cgroups(self, services_list): """ Add the cgroup entry to start tracking the services cgroups. 
""" if self.enabled() and services_list is not None: for service in services_list: service_name = service.get('name', None) if service_name is not None: self.start_tracking_unit_cgroups(service_name) @staticmethod def get_extension_services_list(): """ ResourceLimits for extensions are coming from /HandlerManifest.json file. Use this pattern to determine all the installed extension HandlerManifest files and read the extension services if ResourceLimits are present. """ extensions_services = {} for manifest_path in glob.iglob(os.path.join(conf.get_lib_dir(), "*/HandlerManifest.json")): match = re.search("(?P[\\w+\\.-]+).HandlerManifest\\.json", manifest_path) if match is not None: extensions_name = match.group('extname') if not extensions_name.startswith('WALinuxAgent'): try: data = json.loads(fileutil.read_file(manifest_path)) resource_limits = data[0].get('resourceLimits', None) services = resource_limits.get('services') if resource_limits else None extensions_services[extensions_name] = services except (IOError, OSError) as e: log_cgroup_warning( 'Failed to load manifest file ({0}): {1}'.format(manifest_path, e.strerror)) except ValueError: log_cgroup_warning('Malformed manifest file ({0}).'.format(manifest_path)) return extensions_services # unique instance for the singleton _instance = None @staticmethod def get_instance(): if CGroupConfigurator._instance is None: CGroupConfigurator._instance = CGroupConfigurator._Impl() return CGroupConfigurator._instance Azure-WALinuxAgent-a976115/azurelinuxagent/ga/cgroupcontroller.py000066400000000000000000000151461510742556200251720ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import errno
import glob
import os

from datetime import timedelta

from azurelinuxagent.common import logger, conf
from azurelinuxagent.common.exception import CGroupsException
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils import fileutil

_REPORT_EVERY_HOUR = timedelta(hours=1)
_DEFAULT_REPORT_PERIOD = timedelta(seconds=conf.get_cgroup_check_period())

AGENT_NAME_TELEMETRY = "walinuxagent.service"  # Name used for telemetry; it needs to be consistent even if the name of the service changes
AGENT_LOG_COLLECTOR = "azure-walinuxagent-logcollector"


class CounterNotFound(Exception):
    pass


class MetricValue(object):
    """
    Class for defining all the required metric fields to send telemetry.
    """

    def __init__(self, category, counter, instance, value, report_period=_DEFAULT_REPORT_PERIOD):
        self._category = category
        self._counter = counter
        self._instance = instance
        self._value = value
        self._report_period = report_period

    @property
    def category(self):
        return self._category

    @property
    def counter(self):
        return self._counter

    @property
    def instance(self):
        return self._instance

    @property
    def value(self):
        return self._value

    @property
    def report_period(self):
        return self._report_period


class MetricsCategory(object):
    MEMORY_CATEGORY = "Memory"
    CPU_CATEGORY = "CPU"


class MetricsCounter(object):
    PROCESSOR_PERCENT_TIME = "% Processor Time"
    THROTTLED_TIME = "Throttled Time (s)"
    TOTAL_MEM_USAGE = "Total Memory Usage (B)"
    ANON_MEM_USAGE = "Anon Memory Usage (B)"
    CACHE_MEM_USAGE = "Cache Memory Usage (B)"
    MAX_MEM_USAGE = "Max Memory Usage (B)"
    SWAP_MEM_USAGE = "Swap Memory Usage (B)"
    MEM_THROTTLED = "Total Memory Throttled Events"
    AVAILABLE_MEM = "Available Memory (MB)"
    USED_MEM = "Used Memory (MB)"


class _CgroupController(object):
    def __init__(self, name, cgroup_path):
        """
        Initialize data collection for the controller
        :param: name: Name of the CGroup
        :param: cgroup_path: Path of the controller
        :return:
        """
        self.name = name
        self.path = cgroup_path

    def __str__(self):
        return "{0} [{1}]".format(self.name, self.path)

    def _get_cgroup_file(self, file_name):
        return os.path.join(self.path, file_name)

    def _get_file_contents(self, file_name):
        """
        Retrieve the contents of a file.

        :param str file_name: Name of the file within that metric controller
        :return: Entire contents of the file
        :rtype: str
        """
        parameter_file = self._get_cgroup_file(file_name)
        return fileutil.read_file(parameter_file)

    def _get_parameters(self, parameter_name, first_line_only=False):
        """
        Retrieve the values of a parameter from a controller.
        Returns a list of values in the file.

        :param first_line_only: return only the first line.
        :param str parameter_name: Name of the file within that metric controller
        :return: The first line of the file, without line terminator
        :rtype: [str]
        """
        result = []
        try:
            values = self._get_file_contents(parameter_name).splitlines()
            result = values[0] if first_line_only else values
        except IndexError:
            parameter_filename = self._get_cgroup_file(parameter_name)
            logger.error("File {0} is empty but should not be".format(parameter_filename))
            raise CGroupsException("File {0} is empty but should not be".format(parameter_filename))
        except Exception as e:
            if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT:  # pylint: disable=E1101
                raise e
            parameter_filename = self._get_cgroup_file(parameter_name)
            raise CGroupsException("Exception while attempting to read {0}".format(parameter_filename), e)
        return result

    def is_active(self):
        """
        Returns True if any processes belong to the cgroup. In v1, cgroup.procs returns a list of the thread
        group IDs belonging to the cgroup. In v2, cgroup.procs returns a list of the process IDs belonging to
        the cgroup.
        """
        try:
            def _found_cgroup_procs(file):
                try:
                    procs = fileutil.read_file(file).splitlines()
                    if len(procs) > 0:
                        return True
                except (IOError, OSError) as e:
                    if e.errno == errno.ENOENT:
                        # only suppressing file-not-found exceptions
                        pass
                    else:
                        raise
                return False

            # In v1, the cgroup.procs file is present in the service/slice cgroup directory.
            if _found_cgroup_procs(os.path.join(self.path, "cgroup.procs")):
                return True
            # In v2, the cgroup.procs file is present in the scope cgroup for extensions
            for cgroup_file in glob.iglob(os.path.join(self.path, "*/cgroup.procs")):
                if _found_cgroup_procs(cgroup_file):
                    return True
        except Exception as e:
            logger.periodic_warn(logger.EVERY_HALF_HOUR,
                                 'Could not get list of procs from "cgroup.procs" file in the cgroup: {0}.'
                                 ' Internal error: {1}'.format(self.path, ustr(e)))
        return False

    def get_tracked_metrics(self):
        """
        Retrieves the current values of the metrics tracked for this controller/cgroup and returns them as an array.
        """
        raise NotImplementedError()

    def get_unit_properties(self):
        """
        Returns a list of the unit properties to collect for the controller.
        """
        raise NotImplementedError()

    def get_controller_type(self):
        """
        Returns the type of the controller. Example: CPU, Memory, etc.
        """
        raise NotImplementedError()


Azure-WALinuxAgent-a976115/azurelinuxagent/ga/cgroupstelemetry.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
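`_get_parameters` above reads a cgroup interface file and returns either all of its lines or just the first one, treating an empty file as an error. A standalone sketch of that behavior (the `cpu.max` file name is just an illustrative cgroup v2 interface file, simulated here with a temp file):

```python
import os
import tempfile


def read_cgroup_parameter(parameter_file, first_line_only=False):
    # Minimal stand-in for _CgroupController._get_parameters(): return all lines,
    # or only the first line when first_line_only is True; an empty file is an error.
    with open(parameter_file, "r") as f:
        values = f.read().splitlines()
    if len(values) == 0:
        raise ValueError("File {0} is empty but should not be".format(parameter_file))
    return values[0] if first_line_only else values


# Simulate a cgroup interface file (e.g. cpu.max in cgroup v2) with a temp file.
tmp_dir = tempfile.mkdtemp()
sample_file = os.path.join(tmp_dir, "cpu.max")
with open(sample_file, "w") as f:
    f.write("max 100000\n")
```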
#
# Requires Python 2.6+ and Openssl 1.0+

import errno
import threading

from azurelinuxagent.common import logger
from azurelinuxagent.ga.cpucontroller import _CpuController
from azurelinuxagent.common.future import ustr


class CGroupsTelemetry(object):
    """
    """
    _tracked = {}
    _rlock = threading.RLock()

    @staticmethod
    def _get_tracking_id(cgroup_controller):
        controller_type = cgroup_controller.get_controller_type()
        # Since the path is the same for all controllers in v2, we need to differentiate them to track them separately
        tracking_id = "{0}:{1}".format(controller_type, cgroup_controller.path)
        return tracking_id

    @staticmethod
    def track_cgroup_controller(cgroup_controller):
        """
        Adds the given item to the dictionary of tracked cgroup controllers
        """
        if isinstance(cgroup_controller, _CpuController):
            # set the current cpu usage
            cgroup_controller.initialize_cpu_usage()

        with CGroupsTelemetry._rlock:
            tracking_id = CGroupsTelemetry._get_tracking_id(cgroup_controller)
            if not CGroupsTelemetry.is_tracked(tracking_id):
                CGroupsTelemetry._tracked[tracking_id] = cgroup_controller
                logger.info("Started tracking {0} cgroup {1}", cgroup_controller.get_controller_type(), cgroup_controller)

    @staticmethod
    def is_tracked(path):
        """
        Returns true if the given item is in the list of tracked items
        O(1) operation.
        """
        with CGroupsTelemetry._rlock:
            if path in CGroupsTelemetry._tracked:
                return True
        return False

    @staticmethod
    def stop_tracking(cgroup):
        """
        Stop tracking the cgroups for the given path
        """
        with CGroupsTelemetry._rlock:
            tracking_id = CGroupsTelemetry._get_tracking_id(cgroup)
            if tracking_id in CGroupsTelemetry._tracked:
                CGroupsTelemetry._tracked.pop(tracking_id)
                logger.info("Stopped tracking {0} cgroup {1}", cgroup.get_controller_type(), cgroup)

    @staticmethod
    def poll_all_tracked():
        metrics = []
        inactive_controllers = []
        with CGroupsTelemetry._rlock:
            for controller in CGroupsTelemetry._tracked.values():
                try:
                    metrics.extend(controller.get_tracked_metrics())
                except Exception as e:
                    # There can be scenarios when the CGroup has been deleted by the time we are fetching the values
                    # from it. This would raise an IOError with file entry not found (ERRNO: 2). We do not want to log
                    # every occurrence of such a case as it would be very verbose. We do want to log all the other
                    # exceptions which could occur, which is why we do a periodic log for all the other errors.
                    if not isinstance(e, (IOError, OSError)) or e.errno != errno.ENOENT:  # pylint: disable=E1101
                        logger.periodic_warn(logger.EVERY_HOUR, '[PERIODIC] Could not collect metrics for cgroup '
                                                                '{0}. Error : {1}'.format(controller.name, ustr(e)))
                if not controller.is_active():
                    inactive_controllers.append(controller)
            for inactive_controller in inactive_controllers:
                CGroupsTelemetry.stop_tracking(inactive_controller)
        return metrics

    @staticmethod
    def reset():
        with CGroupsTelemetry._rlock:
            CGroupsTelemetry._tracked.clear()  # emptying the dictionary


Azure-WALinuxAgent-a976115/azurelinuxagent/ga/collect_logs.py

# Microsoft Azure Linux Agent
#
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import datetime
import os
import sys
import threading
import time

from azurelinuxagent.ga import logcollector, cgroupconfigurator
import azurelinuxagent.common.conf as conf
from azurelinuxagent.common import logger
from azurelinuxagent.ga.cgroupcontroller import MetricsCounter
from azurelinuxagent.common.event import elapsed_milliseconds, add_event, WALAEventOperation
from azurelinuxagent.common.future import ustr, UTC
from azurelinuxagent.ga.interfaces import ThreadHandlerInterface
from azurelinuxagent.ga.logcollector import COMPRESSED_ARCHIVE_PATH, GRACEFUL_KILL_ERRCODE
from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator, LOGCOLLECTOR_ANON_MEMORY_LIMIT_FOR_V1_AND_V2, LOGCOLLECTOR_CACHE_MEMORY_LIMIT_FOR_V1_AND_V2, LOGCOLLECTOR_MAX_THROTTLED_EVENTS_FOR_V2
from azurelinuxagent.common.protocol.util import get_protocol_util
from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.utils.shellutil import CommandError
from azurelinuxagent.common.version import PY_VERSION_MAJOR, PY_VERSION_MINOR, AGENT_NAME, CURRENT_VERSION


def get_collect_logs_handler():
    return CollectLogsHandler()


def is_log_collection_allowed():
    # There are three conditions that need to be met in order to allow periodic log collection:
    # 1) It should be enabled in the configuration.
    # 2) The system must be using cgroups to manage services - needed for resource limiting of the log
    #    collection. The agent currently fully supports resource limiting for v1, but only supports log
    #    collector resource limiting for v2 if enabled via configuration.
    #    This condition is True if either:
    #      a. cgroup usage in the agent is enabled; OR
    #      b. the machine is using cgroup v2 and v2 resource limiting is enabled in the configuration.
    # 3) The python version must be greater than 2.6 in order to support the ZipFile library used when collecting.
    conf_enabled = conf.get_collect_logs()
    cgroups_enabled = CGroupConfigurator.get_instance().enabled()
    cgroup_v2_resource_limiting_enabled = CGroupConfigurator.get_instance().using_cgroup_v2() and conf.get_enable_cgroup_v2_resource_limiting()
    supported_python = PY_VERSION_MINOR >= 6 if PY_VERSION_MAJOR == 2 else PY_VERSION_MAJOR == 3
    is_allowed = conf_enabled and (cgroups_enabled or cgroup_v2_resource_limiting_enabled) and supported_python

    msg = "Checking if log collection is allowed at this time [{0}]. All three conditions must be met: " \
          "1. configuration enabled [{1}], " \
          "2. cgroups v1 enabled [{2}] OR cgroups v2 is in use and v2 resource limiting configuration enabled [{3}], " \
          "3. python supported: [{4}]".format(is_allowed, conf_enabled, cgroups_enabled,
                                              cgroup_v2_resource_limiting_enabled, supported_python)
    logger.info(msg)
    add_event(
        name=AGENT_NAME,
        version=CURRENT_VERSION,
        op=WALAEventOperation.LogCollection,
        is_success=is_allowed,
        message=msg,
        log_event=False)

    return is_allowed


class CollectLogsHandler(ThreadHandlerInterface):
    """
    Periodically collects and uploads logs from the VM to the host.
    """

    _THREAD_NAME = "CollectLogsHandler"
    __CGROUPS_FLAG_ENV_VARIABLE = "_AZURE_GUEST_AGENT_LOG_COLLECTOR_MONITOR_CGROUPS_"

    @staticmethod
    def get_thread_name():
        return CollectLogsHandler._THREAD_NAME

    @staticmethod
    def enable_monitor_cgroups_check():
        os.environ[CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE] = "1"

    @staticmethod
    def disable_monitor_cgroups_check():
        if CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE in os.environ:
            del os.environ[CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE]

    @staticmethod
    def is_enabled_monitor_cgroups_check():
        if CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE in os.environ:
            return os.environ[CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE] == "1"
        return False

    def __init__(self):
        self.protocol = None
        self.protocol_util = None
        self.event_thread = None
        self.should_run = True
        self.last_state = None
        self.period = conf.get_collect_logs_period()
        self.log_collector_cgroup_path_validation_errors = 0

    def run(self):
        self.start()

    def keep_alive(self):
        return self.should_run

    def is_alive(self):
        return self.event_thread.is_alive()

    def start(self):
        self.event_thread = threading.Thread(target=self.daemon)
        self.event_thread.daemon = True
        self.event_thread.name = self.get_thread_name()
        self.event_thread.start()

    def join(self):
        self.event_thread.join()

    def stopped(self):
        return not self.should_run

    def stop(self):
        self.should_run = False
        if self.is_alive():
            try:
                self.join()
            except RuntimeError:
                pass

    def init_protocols(self):
        # The initialization of ProtocolUtil for the log collection thread should be done within the thread itself
        # rather than initializing it in the ExtHandler thread. This is done to avoid any concurrency issues as each
        # thread would now have its own ProtocolUtil object as per the SingletonPerThread model.
        self.protocol_util = get_protocol_util()
        self.protocol = self.protocol_util.get_protocol()

    def daemon(self):
        # Delay the first collection on start-up to give short-lived VMs (that might be dead before the second
        # collection has a chance to run) an opportunity to produce meaningful logs to collect.
        time.sleep(conf.get_log_collector_initial_delay())

        try:
            CollectLogsHandler.enable_monitor_cgroups_check()
            if self.protocol_util is None or self.protocol is None:
                self.init_protocols()

            while not self.stopped():
                try:
                    self.collect_and_send_logs()
                except Exception as e:
                    logger.error("An error occurred in the log collection thread main loop; "
                                 "will skip the current iteration.\n{0}", ustr(e))
                finally:
                    time.sleep(self.period)
        except Exception as e:
            logger.error("An error occurred in the log collection thread; will exit the thread.\n{0}", ustr(e))
        finally:
            CollectLogsHandler.disable_monitor_cgroups_check()

    def collect_and_send_logs(self):
        if self._collect_logs():
            self._send_logs()

    def _collect_logs(self):
        logger.info("Starting log collection...")

        # Invoke the command line tool in the agent to collect logs. The --scope option starts the process as a
        # systemd transient scope unit. The --property option is used to set systemd memory and cpu properties on
        # the scope.
        systemd_cmd = [
            "systemd-run",
            "--unit={0}".format(logcollector.CGROUPS_UNIT),
            "--slice={0}".format(cgroupconfigurator.LOGCOLLECTOR_SLICE),
            "--scope"
        ] + CGroupConfigurator.get_instance().get_logcollector_unit_properties()

        # The log tool is invoked from the current agent's egg with the command line option
        collect_logs_cmd = [sys.executable, "-u", sys.argv[0], "-collect-logs"]
        final_command = systemd_cmd + collect_logs_cmd

        def exec_command():
            start_time = datetime.datetime.now(UTC)
            success = False
            msg = None
            try:
                shellutil.run_command(final_command, log_error=False)
                duration = elapsed_milliseconds(start_time)
                archive_size = os.path.getsize(COMPRESSED_ARCHIVE_PATH)

                msg = "Successfully collected logs. Archive size: {0} b, elapsed time: {1} ms.".format(archive_size, duration)
                logger.info(msg)
                success = True
                # reset the error count
                self.log_collector_cgroup_path_validation_errors = 0
                return True
            except Exception as e:
                duration = elapsed_milliseconds(start_time)
                err_msg = ustr(e)

                if isinstance(e, CommandError):
                    # pylint has limited (i.e. no) awareness of control flow w.r.t. typing. We disable=no-member
                    # here because we know e must be a CommandError, but pylint still considers the case where
                    # e is a different type of exception.
                    err_msg = ustr("Log Collector exited with code {0}").format(e.returncode)  # pylint: disable=no-member

                    if e.returncode == logcollector.UNEXPECTED_CGROUP_PATH_ERRCODE:  # pylint: disable=no-member
                        self.log_collector_cgroup_path_validation_errors += 1
                        if self.log_collector_cgroup_path_validation_errors < logcollector.LOG_COLLECTOR_CGROUP_PATH_VALIDATION_MAX_FAILURES:
                            logger.info("Log collector cgroup is not in the expected path, will attempt log collection in next iteration.")
                        else:
                            logger.info("Disabling periodic log collection until service restart due to cgroup not in expected path in multiple runs.")
                            self.stop()
                    if e.returncode == logcollector.INVALID_CGROUPS_ERRCODE:  # pylint: disable=no-member
                        logger.info("Disabling periodic log collection until service restart due to process error.")
                        self.stop()
                    # When the log collector memory limit is exceeded, the agent gracefully exits the process with
                    # this error code. Stop the periodic operation because the condition seems to be persistent.
                    elif e.returncode == logcollector.GRACEFUL_KILL_ERRCODE:  # pylint: disable=no-member
                        logger.info("Disabling periodic log collection until service restart due to exceeded process memory limit.")
                        self.stop()
                    else:
                        logger.info(err_msg)

                msg = "Failed to collect logs. Elapsed time: {0} ms. Error: {1}".format(duration, err_msg)
                # No need to log to the local log since we logged stdout, stderr from the process.
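`_collect_logs` above assembles a `systemd-run --scope` invocation so the collector runs as a transient scope under a dedicated slice with resource properties applied. A sketch of that command assembly; the unit and slice names and the two `--property` values below are illustrative stand-ins for the agent's real constants:

```python
import sys


def build_log_collector_command(unit_name, slice_name, unit_properties):
    # Mirror of the command assembly in _collect_logs(): systemd-run launches the
    # collector as a transient scope in the given slice, then the agent re-invokes
    # itself with -collect-logs.
    systemd_cmd = [
        "systemd-run",
        "--unit={0}".format(unit_name),
        "--slice={0}".format(slice_name),
        "--scope",
    ] + unit_properties
    collect_logs_cmd = [sys.executable, "-u", sys.argv[0], "-collect-logs"]
    return systemd_cmd + collect_logs_cmd


cmd = build_log_collector_command(
    "azure-walinuxagent-logcollector.scope",   # assumed unit name
    "azure-walinuxagent-logcollector.slice",   # assumed slice name
    ["--property=CPUAccounting=yes", "--property=CPUQuota=5%"])
```

Running the collector as a scope (rather than a plain child process) is what lets systemd enforce the CPU and memory properties on it.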
                return False
            finally:
                add_event(
                    name=AGENT_NAME,
                    version=CURRENT_VERSION,
                    op=WALAEventOperation.LogCollection,
                    is_success=success,
                    message=msg,
                    log_event=False)

        return exec_command()

    def _send_logs(self):
        msg = None
        success = False
        try:
            with open(COMPRESSED_ARCHIVE_PATH, "rb") as fh:
                archive_content = fh.read()
                self.protocol.upload_logs(archive_content)
            msg = "Successfully uploaded logs."
            logger.info(msg)
            success = True
        except Exception as e:
            msg = "Failed to upload logs. Error: {0}".format(ustr(e))
            logger.warn(msg)
        finally:
            add_event(
                name=AGENT_NAME,
                version=CURRENT_VERSION,
                op=WALAEventOperation.LogCollection,
                is_success=success,
                message=msg,
                log_event=False)


def get_log_collector_monitor_handler(controllers):
    return LogCollectorMonitorHandler(controllers)


class LogCollectorMonitorHandler(ThreadHandlerInterface):
    """
    Periodically monitors and checks the log collector cgroups and sends telemetry to Kusto.
    """

    _THREAD_NAME = "LogCollectorMonitorHandler"

    @staticmethod
    def get_thread_name():
        return LogCollectorMonitorHandler._THREAD_NAME

    def __init__(self, controllers):
        self.event_thread = None
        self.should_run = True
        self.period = 2  # Log collector monitor runs every 2 secs.
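Both handler classes here implement the same stoppable daemon-thread pattern: a `should_run` flag checked once per period, flipped by `stop()`, with `join()` waiting for the loop to exit. A minimal self-contained sketch of that pattern (the class name and short periods are illustrative):

```python
import threading
import time


class StoppableWorker(object):
    # Minimal version of the daemon-thread pattern shared by CollectLogsHandler and
    # LogCollectorMonitorHandler: a flag checked each period, flipped by stop().
    def __init__(self, period=0.01):
        self.should_run = True
        self.period = period
        self.iterations = 0
        self.thread = None

    def stopped(self):
        return not self.should_run

    def start(self):
        self.thread = threading.Thread(target=self._daemon)
        self.thread.daemon = True
        self.thread.start()

    def stop(self):
        self.should_run = False
        if self.thread is not None:
            self.thread.join()

    def _daemon(self):
        while not self.stopped():
            self.iterations += 1  # stand-in for one collection/monitoring pass
            time.sleep(self.period)


worker = StoppableWorker()
worker.start()
time.sleep(0.05)
worker.stop()
```

Marking the thread as a daemon ensures it cannot keep the agent process alive on shutdown, while the explicit flag gives a clean, cooperative stop.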
        self.controllers = controllers
        self.max_recorded_metrics = {}
        self.__should_log_metrics = conf.get_cgroup_log_metrics()

    def run(self):
        self.start()

    def stop(self):
        self.should_run = False
        if self.is_alive():
            self.join()

    def join(self):
        self.event_thread.join()

    def stopped(self):
        return not self.should_run

    def is_alive(self):
        return self.event_thread is not None and self.event_thread.is_alive()

    def start(self):
        self.event_thread = threading.Thread(target=self.daemon)
        self.event_thread.daemon = True
        self.event_thread.name = self.get_thread_name()
        self.event_thread.start()

    def daemon(self):
        try:
            while not self.stopped():
                try:
                    metrics = self._poll_resource_usage()
                    if self.__should_log_metrics:
                        self._log_metrics(metrics)
                    self._verify_memory_limit(metrics)
                except Exception as e:
                    logger.error("An error occurred in the log collection monitor thread loop; "
                                 "will skip the current iteration.\n{0}", ustr(e))
                finally:
                    time.sleep(self.period)
        except Exception as e:
            logger.error(
                "An error occurred in the MonitorLogCollectorCgroupsHandler thread; will exit the thread.\n{0}",
                ustr(e))

    def get_max_recorded_metrics(self):
        return self.max_recorded_metrics

    def _poll_resource_usage(self):
        metrics = []
        for controller in self.controllers:
            metrics.extend(controller.get_tracked_metrics())

        for metric in metrics:
            current_max = self.max_recorded_metrics.get(metric.counter)
            self.max_recorded_metrics[metric.counter] = metric.value if current_max is None else max(current_max, metric.value)

        return metrics

    def _log_metrics(self, metrics):
        for metric in metrics:
            logger.info("Metric {0}/{1} [{2}] = {3}".format(metric.category, metric.counter, metric.instance, metric.value))

    def _verify_memory_limit(self, metrics):
        current_anon_and_swap_usage = 0
        current_cache_usage = 0
        memory_throttled_events = 0
        for metric in metrics:
            if metric.counter == MetricsCounter.ANON_MEM_USAGE:
                current_anon_and_swap_usage += metric.value
            elif metric.counter == MetricsCounter.SWAP_MEM_USAGE:
                current_anon_and_swap_usage += metric.value
            elif metric.counter == MetricsCounter.CACHE_MEM_USAGE:
                current_cache_usage = metric.value
            elif metric.counter == MetricsCounter.MEM_THROTTLED:
                memory_throttled_events = metric.value

        mem_limit_exceeded = False

        if current_anon_and_swap_usage > LOGCOLLECTOR_ANON_MEMORY_LIMIT_FOR_V1_AND_V2:
            mem_limit_exceeded = True
            msg = "Log collector anon + swap memory limit {0} bytes exceeded. The reported usage is {1} bytes.".format(LOGCOLLECTOR_ANON_MEMORY_LIMIT_FOR_V1_AND_V2, current_anon_and_swap_usage)
            logger.info(msg)
            add_event(name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, message=msg)

        if current_cache_usage > LOGCOLLECTOR_CACHE_MEMORY_LIMIT_FOR_V1_AND_V2:
            mem_limit_exceeded = True
            msg = "Log collector cache memory limit {0} bytes exceeded. The reported usage is {1} bytes.".format(LOGCOLLECTOR_CACHE_MEMORY_LIMIT_FOR_V1_AND_V2, current_cache_usage)
            logger.info(msg)
            add_event(name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, message=msg)

        if memory_throttled_events > LOGCOLLECTOR_MAX_THROTTLED_EVENTS_FOR_V2:
            mem_limit_exceeded = True
            msg = "Log collector memory throttled events limit {0} exceeded. The reported number of throttled events is {1}.".format(LOGCOLLECTOR_MAX_THROTTLED_EVENTS_FOR_V2, memory_throttled_events)
            logger.info(msg)
            add_event(name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, message=msg)

        if mem_limit_exceeded:
            os._exit(GRACEFUL_KILL_ERRCODE)


Azure-WALinuxAgent-a976115/azurelinuxagent/ga/collect_telemetry_events.py

# Microsoft Azure Linux Agent
#
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
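`_verify_memory_limit` applies three independent checks: anon+swap usage against one limit, cache usage against another, and the count of memory-throttling events against a third; any one of them tripping forces a graceful exit. A pure-function restatement of that decision, with illustrative limit values (the real ones are the `LOGCOLLECTOR_*` constants from cgroupconfigurator):

```python
# Illustrative limits only; the agent's actual values come from cgroupconfigurator constants.
ANON_LIMIT = 25 * 1024 ** 2        # anon + swap, bytes
CACHE_LIMIT = 300 * 1024 ** 2      # page cache, bytes
THROTTLED_LIMIT = 0                # memory-throttling event count


def memory_limit_exceeded(anon, swap, cache, throttled_events):
    # Same three checks as _verify_memory_limit(): anon+swap usage, cache usage,
    # and the throttle-event count, each compared against its own limit.
    return ((anon + swap) > ANON_LIMIT
            or cache > CACHE_LIMIT
            or throttled_events > THROTTLED_LIMIT)
```

Keeping the decision separate from the side effects (logging, telemetry, `os._exit`) makes the thresholds easy to test in isolation.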
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import datetime
import json
import os
import re
import threading
import time
from collections import defaultdict

import azurelinuxagent.common.logger as logger
from azurelinuxagent.common import conf
from azurelinuxagent.common.agent_supported_feature import get_supported_feature_by_name, SupportedFeatureNames
from azurelinuxagent.common.event import EVENTS_DIRECTORY, TELEMETRY_LOG_EVENT_ID, \
    TELEMETRY_LOG_PROVIDER_ID, add_event, WALAEventOperation, add_log_event, get_event_logger, \
    CollectOrReportEventDebugInfo, EVENT_FILE_REGEX, parse_event, redact_event_msg
from azurelinuxagent.common.exception import InvalidExtensionEventError, ServiceStoppedError, EventError
from azurelinuxagent.common.future import ustr, is_file_not_found_error, UTC
from azurelinuxagent.common.utils.textutil import redact_sas_token
from azurelinuxagent.ga.interfaces import ThreadHandlerInterface
from azurelinuxagent.common.telemetryevent import TelemetryEvent, TelemetryEventParam, \
    GuestAgentGenericLogsSchema, GuestAgentExtensionEventsSchema
from azurelinuxagent.common.utils import textutil
from azurelinuxagent.ga.exthandlers import HANDLER_NAME_PATTERN
from azurelinuxagent.ga.periodic_operation import PeriodicOperation

# Event file specific retries and delays.
NUM_OF_EVENT_FILE_RETRIES = 3
EVENT_FILE_RETRY_DELAY = 1  # seconds


def get_collect_telemetry_events_handler(send_telemetry_events_handler):
    return CollectTelemetryEventsHandler(send_telemetry_events_handler)


class ExtensionEventSchema(object):
    """
    Class for defining the schema for Extension Events.

    Sample Extension Event:
        {
            "Version": "1.0.0.23",
            "Timestamp": "2018-01-02T22:08:12.510696Z"  // time in UTC (ISO-8601 standard),
            "TaskName": "TestRun"  // open for publishers,
            "EventLevel": "Critical/Error/Warning/Verbose/Informational/LogAlways",
            "Message": "Successful test"  // max 3K, 3072 characters,
            "EventPid": "1",
            "EventTid": "2",
            "OperationId": "Guid (str)"
        }

    From the next version (2.10+) we accept integer values for the EventPid and EventTid fields, but we still
    support the string type for backward compatibility.
    """
    Version = "Version"
    Timestamp = "Timestamp"
    TaskName = "TaskName"
    EventLevel = "EventLevel"
    Message = "Message"
    EventPid = "EventPid"
    EventTid = "EventTid"
    OperationId = "OperationId"


class _ProcessExtensionEvents(PeriodicOperation):
    """
    Periodic operation for collecting extension telemetry events and enqueueing them for the
    SendTelemetryHandler thread.
    """

    _EXTENSION_EVENT_COLLECTION_PERIOD = datetime.timedelta(seconds=conf.get_etp_collection_period())
    _EXTENSION_EVENT_FILE_NAME_REGEX = re.compile(r"^(\d+)\.json$", re.IGNORECASE)

    # Limits
    _MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD = 360
    _EXTENSION_EVENT_FILE_MAX_SIZE = 4 * 1024 * 1024  # 4 MB = 4 * 1,048,576 Bytes
    _EXTENSION_EVENT_MAX_SIZE = 1024 * 6  # 6Kb or 6144 characters. Limit for the whole event. Prevents oversized events.
    _EXTENSION_EVENT_MAX_MSG_LEN = 1024 * 3  # 3Kb or 3072 chars.
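The `_EXTENSION_EVENT_REQUIRED_FIELDS` constant just below is derived from `ExtensionEventSchema` via `dir()`, so adding a field to the schema class automatically makes it required. A self-contained sketch of that derivation, using a copy of the schema class:

```python
class ExtensionEventSchema(object):
    # Copy of the schema class: each attribute names one required event field.
    Version = "Version"
    Timestamp = "Timestamp"
    TaskName = "TaskName"
    EventLevel = "EventLevel"
    Message = "Message"
    EventPid = "EventPid"
    EventTid = "EventTid"
    OperationId = "OperationId"


# Same comprehension as _EXTENSION_EVENT_REQUIRED_FIELDS: dir() enumerates the class
# attributes; dunder names and callables are filtered out, the rest are lowercased.
REQUIRED_FIELDS = [attr.lower() for attr in dir(ExtensionEventSchema)
                   if not callable(getattr(ExtensionEventSchema, attr)) and not attr.startswith("__")]
```

Lowercasing the names lets the collector validate incoming event keys case-insensitively.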
    _EXTENSION_EVENT_REQUIRED_FIELDS = [attr.lower() for attr in dir(ExtensionEventSchema) if
                                        not callable(getattr(ExtensionEventSchema, attr)) and not attr.startswith("__")]

    def __init__(self, send_telemetry_events_handler):
        super(_ProcessExtensionEvents, self).__init__(_ProcessExtensionEvents._EXTENSION_EVENT_COLLECTION_PERIOD)
        self._send_telemetry_events_handler = send_telemetry_events_handler

    def _operation(self):
        if self._send_telemetry_events_handler.stopped():
            logger.warn("{0} service is not running, skipping current iteration".format(
                self._send_telemetry_events_handler.get_thread_name()))
            return

        delete_all_event_files = True
        extension_handler_with_event_dirs = []

        try:
            extension_handler_with_event_dirs = self._get_extension_events_dir_with_handler_name(conf.get_ext_log_dir())

            if not extension_handler_with_event_dirs:
                logger.verbose("No Extension events directory exist")
                return

            for extension_handler_with_event_dir in extension_handler_with_event_dirs:
                handler_name = extension_handler_with_event_dir[0]
                handler_event_dir_path = extension_handler_with_event_dir[1]
                self._capture_extension_events(handler_name, handler_event_dir_path)
        except ServiceStoppedError:
            # Since the service stopped, we should not delete the extension files; retry sending them whenever
            # the telemetry service comes back up.
            delete_all_event_files = False
        except Exception as error:
            msg = "Unknown error occurred when trying to collect extension events:{0}".format(
                textutil.format_exception(error))
            add_event(op=WALAEventOperation.ExtensionTelemetryEventProcessing, message=msg, is_success=False)
        finally:
            # Always ensure that the events directories are deleted on each run (except when the telemetry
            # service is stopped), even if we run into an error and don't process them this run.
            if delete_all_event_files:
                self._ensure_all_events_directories_empty(extension_handler_with_event_dirs)

    @staticmethod
    def _get_extension_events_dir_with_handler_name(extension_log_dir):
        """
        Get the full path to the events directory for all extension handlers that have one.
        :param extension_log_dir: Base log directory for all extensions
        :return: A list of full paths of existing events directories for all handlers
        """
        extension_handler_with_event_dirs = []

        for ext_handler_name in os.listdir(extension_log_dir):
            # Check if it's an extension directory
            if not os.path.isdir(os.path.join(extension_log_dir, ext_handler_name)) \
                    or re.match(HANDLER_NAME_PATTERN, ext_handler_name) is None:
                continue

            # Check if an EVENTS_DIRECTORY directory exists
            extension_event_dir = os.path.join(extension_log_dir, ext_handler_name, EVENTS_DIRECTORY)
            if os.path.exists(extension_event_dir):
                extension_handler_with_event_dirs.append((ext_handler_name, extension_event_dir))

        return extension_handler_with_event_dirs

    def _event_file_size_allowed(self, event_file_path):
        event_file_size = os.stat(event_file_path).st_size
        if event_file_size > self._EXTENSION_EVENT_FILE_MAX_SIZE:
            convert_to_mb = lambda x: (1.0 * x) / (1000 * 1000)
            msg = "Skipping file: {0} as its size is {1:.2f} Mb > Max size allowed {2:.1f} Mb".format(
                event_file_path, convert_to_mb(event_file_size),
                convert_to_mb(self._EXTENSION_EVENT_FILE_MAX_SIZE))
            logger.warn(msg)
            add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True)
            return False
        return True

    def _capture_extension_events(self, handler_name, handler_event_dir_path):
        """
        Capture extension events and add them to the events_list.
        :param handler_name: Complete handler name. Eg: Microsoft.CPlat.Core.RunCommandLinux
        :param handler_event_dir_path: Full path. Eg: '/var/log/azure/Microsoft.CPlat.Core.RunCommandLinux/events'
        """

        # Filter out the files that do not follow the pre-defined EXTENSION_EVENT_FILE_NAME_REGEX
        event_files = [event_file for event_file in os.listdir(handler_event_dir_path) if
                       re.match(self._EXTENSION_EVENT_FILE_NAME_REGEX, event_file) is not None]
        # Pick the latest files first; we'll discard older events if we reach the per-period limit
        event_files.sort(reverse=True)

        captured_extension_events_count = 0
        dropped_events_with_error_count = defaultdict(int)

        try:
            for event_file in event_files:

                event_file_path = os.path.join(handler_event_dir_path, event_file)
                try:
                    logger.verbose("Processing event file: {0}", event_file_path)

                    if not self._event_file_size_allowed(event_file_path):
                        continue

                    # We support multiple events in a file; read the file and parse events.
                    captured_extension_events_count = self._enqueue_events_and_get_count(handler_name, event_file_path,
                                                                                        captured_extension_events_count,
                                                                                        dropped_events_with_error_count)

                    # We only allow _MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD events per period per handler
                    if captured_extension_events_count >= self._MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD:
                        msg = "Reached max count for the extension: {0}; Max Limit: {1}. Skipping the rest.".format(
                            handler_name, self._MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD)
                        logger.warn(msg)
                        add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True)
                        break
                except ServiceStoppedError:
                    # Not logging here as already logged once, re-raising.
                    # Since we already started processing this file, delete it as we could've already sent some events out.
                    # This is a trade-off between data replication vs data loss.
                    raise
                except Exception as error:
                    msg = "Failed to process event file {0}:{1}".format(event_file,
                                                                        textutil.format_exception(error))
                    logger.warn(msg)
                    add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True)
                finally:
                    # Todo: We should delete files after ensuring that we sent the data to Wireserver successfully
                    # from our end rather than deleting first and sending later. This is to ensure the data reliability
                    # of the agent telemetry pipeline.
                    os.remove(event_file_path)
        finally:
            if dropped_events_with_error_count:
                msg = "Dropped events for Extension: {0}; Details:\n\t{1}".format(handler_name, '\n\t'.join(
                    ["Reason: {0}; Dropped Count: {1}".format(k, v) for k, v in
                     dropped_events_with_error_count.items()]))
                logger.warn(msg)
                add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True)

            if captured_extension_events_count > 0:
                logger.info("Collected {0} events for extension: {1}".format(captured_extension_events_count,
                                                                             handler_name))

    @staticmethod
    def _ensure_all_events_directories_empty(extension_events_directories):
        if not extension_events_directories:
            return

        for extension_handler_with_event_dir in extension_events_directories:
            event_dir_path = extension_handler_with_event_dir[1]
            if not os.path.exists(event_dir_path):
                return

            log_err = True
            # Delete any residue files in the events directory
            for residue_file in os.listdir(event_dir_path):
                try:
                    os.remove(os.path.join(event_dir_path, residue_file))
                except Exception as error:
                    # Only log the first error once per handler per run to keep the logfile clean
                    if log_err:
                        logger.error("Failed to completely clear the {0} directory. Exception: {1}", event_dir_path,
                                     ustr(error))
                        log_err = False

    @staticmethod
    def _read_event_file(event_file_path):
        """
        Read the event file and return the data.
        :param event_file_path: Full path of the event file.
        :return: Event data in list or string format.
        """
        # Retry reading the event file in case the file is modified while reading.
        # We catch FileNotFoundError and ValueError to handle the case where the file is deleted
        # or modified while reading.
        error_count = 0
        while True:
            try:
                # Read the event file and decode it properly
                with open(event_file_path, "rb") as event_file_descriptor:
                    event_data = event_file_descriptor.read().decode("utf-8")

                # Parse the string and get the list of events
                return json.loads(event_data)
            except Exception as e:
                if is_file_not_found_error(e) or isinstance(e, ValueError):
                    error_count += 1
                    if error_count >= NUM_OF_EVENT_FILE_RETRIES:
                        raise
                else:
                    raise

            time.sleep(EVENT_FILE_RETRY_DELAY)

    def _enqueue_events_and_get_count(self, handler_name, event_file_path, captured_events_count,
                                      dropped_events_with_error_count):

        event_file_time = datetime.datetime.fromtimestamp(os.path.getmtime(event_file_path)).replace(tzinfo=UTC)

        events = self._read_event_file(event_file_path)

        # We allow multiple events in a file, but there can be an instance where the file only has a single
        # JSON event and not a list. Handle that condition too.
        if not isinstance(events, list):
            events = [events]

        for event in events:
            try:
                self._send_telemetry_events_handler.enqueue_event(
                    self._parse_telemetry_event(handler_name, event, event_file_time)
                )
                captured_events_count += 1
            except InvalidExtensionEventError as invalid_error:
                # These are the errors thrown if there's an error parsing the event. We want to report these back to the
                # extension publishers so that they are aware of the issues.
                # The error messages are all static messages; we use them to build a dict and emit an event at the
                # end of each run to notify if there were any errors parsing events for the extension.
                dropped_events_with_error_count[ustr(invalid_error)] += 1
            except ServiceStoppedError as stopped_error:
                logger.error(
                    "Unable to enqueue events as service stopped: {0}. Stopping collecting extension events".format(
                        ustr(stopped_error)))
                raise
            except Exception as error:
                logger.warn("Unable to parse and transmit event, error: {0}".format(error))

            if captured_events_count >= self._MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD:
                break

        return captured_events_count

    def _parse_telemetry_event(self, handler_name, extension_unparsed_event, event_file_time):
        """
        Parse the JSON event from the file and convert it to a TelemetryEvent object with the required data.
        :return: A complete TelemetryEvent with all required fields filled in properly.
                 Raises if the event breaches the contract.
        """
        extension_event = self._parse_event_and_ensure_it_is_valid(extension_unparsed_event)

        # Create a telemetry event, add all common parameters to the event,
        # and then overwrite the common params with extension event params if they are the same.

        event = TelemetryEvent(TELEMETRY_LOG_EVENT_ID, TELEMETRY_LOG_PROVIDER_ID)
        event.file_type = "json"
        CollectTelemetryEventsHandler.add_common_params_to_telemetry_event(event, event_file_time)

        replace_or_add_params = {
            GuestAgentGenericLogsSchema.EventName: "{0}-{1}".format(handler_name, extension_event[
                ExtensionEventSchema.Version.lower()]),
            GuestAgentGenericLogsSchema.CapabilityUsed: extension_event[ExtensionEventSchema.EventLevel.lower()],
            GuestAgentGenericLogsSchema.TaskName: extension_event[ExtensionEventSchema.TaskName.lower()],
            GuestAgentGenericLogsSchema.Context1: extension_event[ExtensionEventSchema.Message.lower()],
            GuestAgentGenericLogsSchema.Context2: extension_event[ExtensionEventSchema.Timestamp.lower()],
            GuestAgentGenericLogsSchema.Context3: extension_event[ExtensionEventSchema.OperationId.lower()],
            GuestAgentGenericLogsSchema.EventPid: extension_event[ExtensionEventSchema.EventPid.lower()],
            GuestAgentGenericLogsSchema.EventTid: extension_event[ExtensionEventSchema.EventTid.lower()]
        }
        self._replace_or_add_param_in_event(event, replace_or_add_params)
        return event

    def _parse_event_and_ensure_it_is_valid(self, extension_event):
        """
        Parse the JSON event from the file. Raise InvalidExtensionEventError if the event breaches the pre-set contract.
        :param extension_event: The json event from file
        :return: Verified JSON event that qualifies the contract.
        """
        def _clean_value(k, v):
            if v is not None:
                if isinstance(v, int):
                    if k.lower() in [ExtensionEventSchema.EventPid.lower(), ExtensionEventSchema.EventTid.lower()]:
                        return str(v)
                unredacted = v.strip()
                # redact the sas token from the event
                return redact_sas_token(unredacted)
            return v

        event_size = 0
        key_err_msg = "{0}: {1} not found"

        # Convert the dict to all lower keys to avoid schema confusion.
        # Only pick the params that we care about and skip the rest.
        event = dict((k.lower(), _clean_value(k, v)) for k, v in extension_event.items() if
                     k.lower() in self._EXTENSION_EVENT_REQUIRED_FIELDS)

        # Trim message and only pick the first 3k chars
        message_key = ExtensionEventSchema.Message.lower()
        if message_key in event:
            event[message_key] = event[message_key][:self._EXTENSION_EVENT_MAX_MSG_LEN]
        else:
            raise InvalidExtensionEventError(
                key_err_msg.format(InvalidExtensionEventError.MissingKeyError, ExtensionEventSchema.Message))

        if not event[message_key]:
            raise InvalidExtensionEventError(
                "{0}: {1} should not be empty".format(InvalidExtensionEventError.EmptyMessageError,
                                                      ExtensionEventSchema.Message))

        for required_key in self._EXTENSION_EVENT_REQUIRED_FIELDS:
            # If a required key is not in the event, then raise
            if required_key not in event:
                raise InvalidExtensionEventError(
                    key_err_msg.format(InvalidExtensionEventError.MissingKeyError, required_key))

            # If event_size > _EXTENSION_EVENT_MAX_SIZE (6 KB), then raise
            if event_size > self._EXTENSION_EVENT_MAX_SIZE:
                raise InvalidExtensionEventError(
                    "{0}: max event size allowed: {1}".format(InvalidExtensionEventError.OversizeEventError,
                                                              self._EXTENSION_EVENT_MAX_SIZE))

            event_size += len(event[required_key])

        return event

    @staticmethod
    def _replace_or_add_param_in_event(event, replace_or_add_params):
        for param in event.parameters:
            if param.name in replace_or_add_params:
                param.value = replace_or_add_params.pop(param.name)

        if not replace_or_add_params:
            # All values replaced, return
            return

        # Add the remaining params to the event
        for param_name in replace_or_add_params:
            event.parameters.append(TelemetryEventParam(param_name, replace_or_add_params[param_name]))


class _CollectAndEnqueueEvents(PeriodicOperation):
    """
    Periodic operation to collect telemetry events located in the events folder and enqueue them for the
    SendTelemetryHandler thread.
    """

    _EVENT_COLLECTION_PERIOD = datetime.timedelta(minutes=1)

    def __init__(self, send_telemetry_events_handler):
        super(_CollectAndEnqueueEvents, self).__init__(_CollectAndEnqueueEvents._EVENT_COLLECTION_PERIOD)
        self._send_telemetry_events_handler = send_telemetry_events_handler

    def _operation(self):
        """
        Periodically send any events located in the events folder
        """
        try:
            if self._send_telemetry_events_handler.stopped():
                logger.warn("{0} service is not running, skipping iteration.".format(
                    self._send_telemetry_events_handler.get_thread_name()))
                return
            self.process_events()
        except Exception as error:
            err_msg = "Failure in collecting telemetry events: {0}".format(ustr(error))
            add_event(op=WALAEventOperation.UnhandledError, message=err_msg, is_success=False)

    def process_events(self):
        """
        Returns a list of events that need to be sent to the telemetry pipeline and deletes the corresponding
        files from the events directory.
        """
        event_directory_full_path = os.path.join(conf.get_lib_dir(), EVENTS_DIRECTORY)
        event_files = os.listdir(event_directory_full_path)
        debug_info = CollectOrReportEventDebugInfo(operation=CollectOrReportEventDebugInfo.OP_COLLECT)

        for event_file in event_files:
            try:
                match = EVENT_FILE_REGEX.search(event_file)
                if match is None:
                    continue

                event_file_path = os.path.join(event_directory_full_path, event_file)

                try:
                    logger.verbose("Processing event file: {0}", event_file_path)

                    event = self._read_and_parse_event_file(event_file_path)
                    redact_event_msg(event)

                    # "legacy" events are events produced by previous versions of the agent (<= 2.2.46) and extensions;
                    # they do not include all the telemetry fields, so we add them here
                    is_legacy_event = match.group('agent_event') is None

                    if is_legacy_event:
                        # We'll use the file creation time for the event's timestamp
                        event_file_creation_time_epoch = os.path.getmtime(event_file_path)
                        event_file_creation_time = datetime.datetime.fromtimestamp(
                            event_file_creation_time_epoch).replace(tzinfo=UTC)

                        if event.is_extension_event():
                            _CollectAndEnqueueEvents._trim_legacy_extension_event_parameters(event)
                            CollectTelemetryEventsHandler.add_common_params_to_telemetry_event(event,
                                                                                              event_file_creation_time)
                        else:
                            _CollectAndEnqueueEvents._update_legacy_agent_event(event, event_file_creation_time)

                    self._send_telemetry_events_handler.enqueue_event(event)
                finally:
                    # Todo: We should delete files after ensuring that we sent the data to Wireserver successfully
                    # from our end rather than deleting first and sending later. This is to ensure the data reliability
                    # of the agent telemetry pipeline.
                    if os.path.exists(event_file_path):
                        os.remove(event_file_path)
            except ServiceStoppedError as stopped_error:
                logger.error(
                    "Unable to enqueue events as service stopped: {0}, skipping events collection".format(
                        ustr(stopped_error)))
            except UnicodeError as uni_err:
                debug_info.update_unicode_error(uni_err)
            except Exception as error:
                debug_info.update_op_error(error)

        debug_info.report_debug_info()

    @staticmethod
    def _read_and_parse_event_file(event_file_path):
        """
        Read the event file and parse it to a TelemetryEvent object.
        :param event_file_path: Full path of the event file.
        :return: TelemetryEvent object.
        """
        # Retry reading the event file in case the file is modified while reading.
        # We catch FileNotFoundError and ValueError to handle the case where the file is deleted
        # or modified while reading.
        error_count = 0
        while True:
            try:
                with open(event_file_path, "rb") as event_fd:
                    event_data = event_fd.read().decode("utf-8")
                    return parse_event(event_data)
            except Exception as e:
                if is_file_not_found_error(e) or isinstance(e, ValueError):
                    error_count += 1
                    if error_count >= NUM_OF_EVENT_FILE_RETRIES:
                        raise
                else:
                    raise EventError("Error parsing event: {0}".format(ustr(e)))

            time.sleep(EVENT_FILE_RETRY_DELAY)

    @staticmethod
    def _update_legacy_agent_event(event, event_creation_time):
        # Ensure that if an agent event is missing a field from the schema defined since 2.2.47, the missing fields
        # will be appended, ensuring the event schema is complete before the event is reported.
        new_event = TelemetryEvent()
        new_event.parameters = []
        CollectTelemetryEventsHandler.add_common_params_to_telemetry_event(new_event, event_creation_time)

        event_params = dict([(param.name, param.value) for param in event.parameters])
        new_event_params = dict([(param.name, param.value) for param in new_event.parameters])

        missing_params = set(new_event_params.keys()).difference(set(event_params.keys()))
        params_to_add = []
        for param_name in missing_params:
            params_to_add.append(TelemetryEventParam(param_name, new_event_params[param_name]))

        event.parameters.extend(params_to_add)

    @staticmethod
    def _trim_legacy_extension_event_parameters(event):
        """
        This method is called for extension events before they are sent out. Per the agreement with extension
        publishers, the parameters that belong to extensions and will be reported intact are Name, Version, Operation,
        OperationSuccess, Message, and Duration. Since there is nothing preventing extensions from instantiating other
        fields (which belong to the agent), we call this method to ensure the rest of the parameters are trimmed since
        they will be replaced with values coming from the agent.
        :param event: Extension event to trim.
        :return: Trimmed extension event; containing only extension-specific parameters.
        """
        params_to_keep = dict.fromkeys([
            GuestAgentExtensionEventsSchema.Name,
            GuestAgentExtensionEventsSchema.Version,
            GuestAgentExtensionEventsSchema.Operation,
            GuestAgentExtensionEventsSchema.OperationSuccess,
            GuestAgentExtensionEventsSchema.Message,
            GuestAgentExtensionEventsSchema.Duration
        ])
        trimmed_params = []

        for param in event.parameters:
            if param.name in params_to_keep:
                trimmed_params.append(param)

        event.parameters = trimmed_params


class CollectTelemetryEventsHandler(ThreadHandlerInterface):
    """
    This handler takes care of fetching the extension telemetry events from the {extension_events_dir} and
    sends them to Kusto for advanced debuggability.
    """

    _THREAD_NAME = "TelemetryEventsCollector"

    def __init__(self, send_telemetry_events_handler):
        self.should_run = True
        self.thread = None
        self._send_telemetry_events_handler = send_telemetry_events_handler

    @staticmethod
    def get_thread_name():
        return CollectTelemetryEventsHandler._THREAD_NAME

    def run(self):
        logger.info("Start Extension Telemetry service.")
        self.start()

    def is_alive(self):
        return self.thread is not None and self.thread.is_alive()

    def start(self):
        self.thread = threading.Thread(target=self.daemon)
        self.thread.daemon = True
        self.thread.name = CollectTelemetryEventsHandler.get_thread_name()
        self.thread.start()

    def stop(self):
        """
        Stop server communication and join the thread to main thread.
        """
        self.should_run = False
        if self.is_alive():
            self.thread.join()

    def stopped(self):
        return not self.should_run

    def daemon(self):
        periodic_operations = [
            _CollectAndEnqueueEvents(self._send_telemetry_events_handler)
        ]

        is_etp_enabled = get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported
        logger.info("Extension Telemetry pipeline enabled: {0}".format(is_etp_enabled))

        if is_etp_enabled:
            periodic_operations.append(_ProcessExtensionEvents(self._send_telemetry_events_handler))

        logger.info("Successfully started the {0} thread".format(self.get_thread_name()))
        while not self.stopped():
            try:
                for periodic_op in periodic_operations:
                    periodic_op.run()
            except Exception as error:
                logger.warn(
                    "An error occurred in the Telemetry Extension thread main loop; will skip the current iteration.\n{0}",
                    ustr(error))
            finally:
                PeriodicOperation.sleep_until_next_operation(periodic_operations)

    @staticmethod
    def add_common_params_to_telemetry_event(event, event_time):
        reporter = get_event_logger()
        reporter.add_common_event_parameters(event, event_time)


# ---------------------------------------------------------------------------
# File: azurelinuxagent/ga/cpucontroller.py
# ---------------------------------------------------------------------------

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import errno
import os
import re

from azurelinuxagent.common.exception import CGroupsException
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.osutil import get_osutil
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.ga.cgroupcontroller import _CgroupController, MetricValue, MetricsCategory, MetricsCounter

re_v1_user_system_times = re.compile(r'user (\d+)\nsystem (\d+)\n')
re_v2_usage_time = re.compile(r'[\s\S]*usage_usec (\d+)[\s\S]*')


class _CpuController(_CgroupController):
    def __init__(self, name, cgroup_path):
        super(_CpuController, self).__init__(name, cgroup_path)
        self._osutil = get_osutil()

        self._previous_cgroup_cpu = None
        self._previous_system_cpu = None
        self._current_cgroup_cpu = None
        self._current_system_cpu = None

        self._previous_throttled_time = None
        self._current_throttled_time = None
        self._track_throttle_time = False

    def _get_cpu_stat_counter(self, counter_name):
        """
        Gets the value for the provided counter in cpu.stat
        """
        try:
            with open(os.path.join(self.path, 'cpu.stat')) as cpu_stat:
                #
                # Sample file v1:
                #     # cat cpu.stat
                #     nr_periods 51660
                #     nr_throttled 19461
                #     throttled_time 1529590856339
                #
                # Sample file v2:
                #     # cat cpu.stat
                #     usage_usec 200161503
                #     user_usec 199388368
                #     system_usec 773134
                #     core_sched.force_idle_usec 0
                #     nr_periods 40059
                #     nr_throttled 40022
                #     throttled_usec 3565247992
                #     nr_bursts 0
                #     burst_usec 0
                #
                for line in cpu_stat:
                    match = re.match(r'{0}\s+(\d+)'.format(counter_name), line)
                    if match is not None:
                        return int(match.groups()[0])
                raise Exception("Cannot find {0}".format(counter_name))
        except (IOError, OSError) as e:
            if e.errno == errno.ENOENT:
                return 0
            raise CGroupsException("Failed to read cpu.stat: {0}".format(ustr(e)))
        except Exception as e:
            raise CGroupsException("Failed to read cpu.stat: {0}".format(ustr(e)))

    def _cpu_usage_initialized(self):
        """
        Returns True if cpu usage has been initialized, False otherwise.
        """
        return self._current_cgroup_cpu is not None and self._current_system_cpu is not None

    def initialize_cpu_usage(self):
        """
        Sets the initial values of CPU usage. This function must be invoked before calling get_cpu_usage().
        """
        raise NotImplementedError()

    def get_cpu_usage(self):
        """
        Computes the CPU used by the cgroup since the last call to this function.

        The usage is measured as a percentage of utilization of 1 core in the system. For example,
        using 1 core all of the time on a 4-core system would be reported as 100%.

        NOTE: initialize_cpu_usage() must be invoked before calling get_cpu_usage()
        """
        raise NotImplementedError()

    def get_cpu_throttled_time(self, read_previous_throttled_time=True):
        """
        Computes the throttled time (in seconds) since the last call to this function.
        NOTE: initialize_cpu_usage() must be invoked before calling this function.
        Computes only the current throttled time if read_previous_throttled_time is set to False.
        """
        raise NotImplementedError()

    def get_tracked_metrics(self):
        # Note: If the current cpu usage is less than the previous usage (metric is negative), then an empty array will
        # be returned and the agent won't track the metrics.
        tracked = []
        cpu_usage = self.get_cpu_usage()
        if cpu_usage >= float(0):
            tracked.append(
                MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.PROCESSOR_PERCENT_TIME, self.name, cpu_usage))

        if self._track_throttle_time:
            throttled_time = self.get_cpu_throttled_time()
            if cpu_usage >= float(0) and throttled_time >= float(0):
                tracked.append(
                    MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.THROTTLED_TIME, self.name,
                                throttled_time))

        return tracked

    def get_unit_properties(self):
        return ["CPUAccounting", "CPUQuotaPerSecUSec"]

    def get_controller_type(self):
        return "cpu"

    def track_throttle_time(self, track_throttle_time):
        """
        Set whether the controller should track the throttle time or not. This is useful when the controller is
        used for tracking CPU usage only.
        """
        self._track_throttle_time = track_throttle_time


class CpuControllerV1(_CpuController):
    def initialize_cpu_usage(self):
        if self._cpu_usage_initialized():
            raise CGroupsException("initialize_cpu_usage() should be invoked only once")
        self._current_cgroup_cpu = self._get_cpu_ticks(allow_no_such_file_or_directory_error=True)
        self._current_system_cpu = self._osutil.get_total_cpu_ticks_since_boot()

    def _get_cpu_ticks(self, allow_no_such_file_or_directory_error=False):
        """
        Returns the number of USER_HZ of CPU time (user and system) consumed by this cgroup.

        If allow_no_such_file_or_directory_error is set to True and cpuacct.stat does not exist the function
        returns 0; this is useful when the function can be called before the cgroup has been created.
        """
        try:
            cpuacct_stat = self._get_file_contents('cpuacct.stat')
        except Exception as e:
            if not isinstance(e, (IOError, OSError)) or e.errno != errno.ENOENT:  # pylint: disable=E1101
                raise CGroupsException("Failed to read cpuacct.stat: {0}".format(ustr(e)))
            if not allow_no_such_file_or_directory_error:
                raise e
            cpuacct_stat = None

        cpu_ticks = 0

        if cpuacct_stat is not None:
            #
            # Sample file:
            #     # cat /sys/fs/cgroup/cpuacct/azure.slice/walinuxagent.service/cpuacct.stat
            #     user 10190
            #     system 3160
            #
            match = re_v1_user_system_times.match(cpuacct_stat)
            if not match:
                raise CGroupsException(
                    "The contents of {0} are invalid: {1}".format(self._get_cgroup_file('cpuacct.stat'), cpuacct_stat))
            cpu_ticks = int(match.groups()[0]) + int(match.groups()[1])

        return cpu_ticks

    def get_cpu_usage(self):
        if not self._cpu_usage_initialized():
            raise CGroupsException("initialize_cpu_usage() must be invoked before the first call to get_cpu_usage()")

        self._previous_cgroup_cpu = self._current_cgroup_cpu
        self._previous_system_cpu = self._current_system_cpu
        self._current_cgroup_cpu = self._get_cpu_ticks()
        self._current_system_cpu = self._osutil.get_total_cpu_ticks_since_boot()

        cgroup_delta = self._current_cgroup_cpu - self._previous_cgroup_cpu
        system_delta = max(1, self._current_system_cpu - self._previous_system_cpu)

        return round(100.0 * self._osutil.get_processor_cores() * float(cgroup_delta) / float(system_delta), 3)

    def get_cpu_throttled_time(self, read_previous_throttled_time=True):
        # Throttled time is reported in nanoseconds in v1
        if not read_previous_throttled_time:
            return float(self._get_cpu_stat_counter(counter_name='throttled_time') / 1E9)

        if not self._cpu_usage_initialized():
            raise CGroupsException(
                "initialize_cpu_usage() must be invoked before the first call to get_cpu_throttled_time()")

        if self._current_throttled_time is None:
            self._current_throttled_time = self._get_cpu_stat_counter(counter_name='throttled_time')

        self._previous_throttled_time = self._current_throttled_time
        self._current_throttled_time = self._get_cpu_stat_counter(counter_name='throttled_time')
        return round(float(self._current_throttled_time - self._previous_throttled_time) / 1E9, 3)


class CpuControllerV2(_CpuController):
    @staticmethod
    def get_system_uptime():
        """
        Get the uptime of the system (including time spent in suspend) in seconds.
        /proc/uptime contains two numbers (values in seconds): the uptime of the system (including
        time spent in suspend) and the amount of time spent in the idle process:

            # cat /proc/uptime
            365380.48 722644.81

        :return: System uptime in seconds
        :rtype: float
        """
        uptime_contents = fileutil.read_file('/proc/uptime').split()
        return float(uptime_contents[0])

    def _get_system_usage(self):
        try:
            return self.get_system_uptime()
        except (OSError, IOError) as e:
            raise CGroupsException("Couldn't read /proc/uptime: {0}".format(ustr(e)))
        except Exception as e:
            raise CGroupsException("Couldn't parse /proc/uptime: {0}".format(ustr(e)))

    def initialize_cpu_usage(self):
        if self._cpu_usage_initialized():
            raise CGroupsException("initialize_cpu_usage() should be invoked only once")
        self._current_cgroup_cpu = self._get_cpu_time(allow_no_such_file_or_directory_error=True)
        self._current_system_cpu = self._get_system_usage()

    def _get_cpu_time(self, allow_no_such_file_or_directory_error=False):
        """
        Returns the CPU time (user and system) consumed by this cgroup in seconds.

        If allow_no_such_file_or_directory_error is set to True and cpu.stat does not exist the function
        returns 0; this is useful when the function can be called before the cgroup has been created.
        """
        try:
            cpu_stat = self._get_file_contents('cpu.stat')
        except Exception as e:
            if not isinstance(e, (IOError, OSError)) or e.errno != errno.ENOENT:  # pylint: disable=E1101
                raise CGroupsException("Failed to read cpu.stat: {0}".format(ustr(e)))
            if not allow_no_such_file_or_directory_error:
                raise e
            cpu_stat = None

        cpu_time = 0
        if cpu_stat is not None:
            #
            # Sample file:
            #
            #     # cat /sys/fs/cgroup/azure.slice/azure-walinuxagent.slice/azure-walinuxagent-logcollector.slice/collect-logs.scope/cpu.stat
            #     usage_usec 1990707
            #     user_usec 1939858
            #     system_usec 50848
            #     core_sched.force_idle_usec 0
            #     nr_periods 397
            #     nr_throttled 397
            #     throttled_usec 37994949
            #     nr_bursts 0
            #     burst_usec 0
            #
            match = re_v2_usage_time.match(cpu_stat)
            if not match:
                raise CGroupsException("The contents of {0} are invalid: {1}".format(self._get_cgroup_file('cpu.stat'), cpu_stat))
            cpu_time = int(match.groups()[0]) / 1E6
        return cpu_time

    def get_cpu_usage(self):
        if not self._cpu_usage_initialized():
            raise CGroupsException("initialize_cpu_usage() must be invoked before the first call to get_cpu_usage()")

        self._previous_cgroup_cpu = self._current_cgroup_cpu
        self._previous_system_cpu = self._current_system_cpu
        self._current_cgroup_cpu = self._get_cpu_time()
        self._current_system_cpu = self._get_system_usage()

        cgroup_delta = self._current_cgroup_cpu - self._previous_cgroup_cpu
        system_delta = max(1.0, self._current_system_cpu - self._previous_system_cpu)

        return round(100.0 * float(cgroup_delta) / float(system_delta), 3)

    def get_cpu_throttled_time(self, read_previous_throttled_time=True):
        # Throttled time is reported in microseconds in v2
        if not read_previous_throttled_time:
            return float(self._get_cpu_stat_counter(counter_name='throttled_usec') / 1E6)

        if not self._cpu_usage_initialized():
            raise CGroupsException("initialize_cpu_usage() must be invoked before the first call to get_cpu_throttled_time()")

        if self._current_throttled_time is None:
            self._current_throttled_time = self._get_cpu_stat_counter(counter_name='throttled_usec')

        self._previous_throttled_time = self._current_throttled_time
        self._current_throttled_time = self._get_cpu_stat_counter(counter_name='throttled_usec')
        return round(float(self._current_throttled_time - self._previous_throttled_time) / 1E6, 3)


# ===== Azure-WALinuxAgent-a976115/azurelinuxagent/ga/env.py =====

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
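The cgroup v2 controller shown earlier derives CPU time from the `usage_usec` counter in `cpu.stat`. A minimal, self-contained sketch of that parsing step (the regex below is a hypothetical stand-in for the module's `re_v2_usage_time` pattern, which is defined elsewhere in the file):

```python
import re

# Hypothetical stand-in for the module-level re_v2_usage_time pattern; it
# extracts the usage_usec counter from the contents of a cgroup v2 cpu.stat file.
RE_V2_USAGE_TIME = re.compile(r'^usage_usec\s+(\d+)', re.MULTILINE)


def parse_v2_cpu_time(cpu_stat_contents):
    """Return the cgroup CPU time in seconds; usage_usec is reported in microseconds."""
    match = RE_V2_USAGE_TIME.search(cpu_stat_contents)
    if match is None:
        raise ValueError("The contents of cpu.stat are invalid: {0}".format(cpu_stat_contents))
    return int(match.group(1)) / 1E6


# Sample values taken from the cpu.stat excerpt in the _get_cpu_time() comment
sample = "usage_usec 1990707\nuser_usec 1939858\nsystem_usec 50848\n"
print(parse_v2_cpu_time(sample))  # 1.990707
```

Dividing by `1E6` converts microseconds to seconds, matching the controller's own conversion; a file without a `usage_usec` line is rejected, mirroring the `CGroupsException` path above.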
#
# Requires Python 2.6+ and Openssl 1.0+
#

import datetime
import re
import socket
import threading

import azurelinuxagent.common.conf as conf
import azurelinuxagent.common.logger as logger

from azurelinuxagent.common.dhcp import get_dhcp_handler
from azurelinuxagent.common import event
from azurelinuxagent.common.event import WALAEventOperation, add_event
from azurelinuxagent.common.future import UTC
from azurelinuxagent.ga.firewall_manager import FirewallManager, FirewallStateError
from azurelinuxagent.common.future import ustr
from azurelinuxagent.ga.interfaces import ThreadHandlerInterface
from azurelinuxagent.common.osutil import get_osutil
from azurelinuxagent.common.protocol.util import get_protocol_util
from azurelinuxagent.common.version import AGENT_NAME
from azurelinuxagent.ga.periodic_operation import PeriodicOperation

CACHE_PATTERNS = [
    re.compile(r"^(.*)\.(\d+)\.(agentsManifest)$", re.IGNORECASE),
    re.compile(r"^(.*)\.(\d+)\.(manifest\.xml)$", re.IGNORECASE),
    re.compile(r"^(.*)\.(\d+)\.(xml)$", re.IGNORECASE)
]

MAXIMUM_CACHED_FILES = 50


def get_env_handler():
    return EnvHandler()


class RemovePersistentNetworkRules(PeriodicOperation):
    def __init__(self, osutil):
        super(RemovePersistentNetworkRules, self).__init__(conf.get_remove_persistent_net_rules_period())
        self.osutil = osutil

    def _operation(self):
        self.osutil.remove_rules_files()


class MonitorDhcpClientRestart(PeriodicOperation):
    def __init__(self, osutil):
        super(MonitorDhcpClientRestart, self).__init__(conf.get_monitor_dhcp_client_restart_period())
        self.osutil = osutil
        self.dhcp_handler = get_dhcp_handler()
        self.dhcp_handler.conf_routes()
        self.dhcp_warning_enabled = True
        self.dhcp_id_list = []

    def _operation(self):
        if len(self.dhcp_id_list) == 0:
            self.dhcp_id_list = self._get_dhcp_client_pid()
            return

        if all(self.osutil.check_pid_alive(pid) for pid in self.dhcp_id_list):
            return

        new_pid = self._get_dhcp_client_pid()
        if len(new_pid) != 0 and new_pid != self.dhcp_id_list:
            logger.info("EnvMonitor: Detected dhcp client restart. Restoring routing table.")
            self.dhcp_handler.conf_routes()
            self.dhcp_id_list = new_pid

    def _get_dhcp_client_pid(self):
        pid = []

        try:
            # return a sorted list since handle_dhclient_restart needs to compare the previous value with
            # the new value and the comparison should not be affected by the order of the items in the list
            pid = sorted(self.osutil.get_dhcp_pid())

            if len(pid) == 0 and self.dhcp_warning_enabled:
                logger.warn("Dhcp client is not running.")
        except Exception as exception:
            if self.dhcp_warning_enabled:
                logger.error("Failed to get the PID of the DHCP client: {0}", ustr(exception))

        self.dhcp_warning_enabled = len(pid) != 0

        return pid


class EnableFirewall(PeriodicOperation):
    def __init__(self, wire_server_address):
        super(EnableFirewall, self).__init__(conf.get_enable_firewall_period())
        self._wire_server_address = wire_server_address
        self._firewall_manager = None  # initialized on demand in the _operation method
        self._message_count = 0
        self._report_after = datetime.datetime.now(UTC)

    def _operation(self):
        try:
            if self._firewall_manager is None:
                self._firewall_manager = FirewallManager.create(self._wire_server_address)

            try:
                if self._firewall_manager.check():
                    return  # The firewall is configured correctly
                self._report(event.warn, "The firewall has not been setup. Will set it up.")
            except FirewallStateError as e:
                self._report(event.warn, "The firewall is not configured correctly. {0}. Will reset it. Current state:\n{1}", ustr(e), self._firewall_manager.get_state())
                self._firewall_manager.remove()

            self._firewall_manager.setup()
            self._report(event.info, "The firewall was setup successfully:\n{0}", self._firewall_manager.get_state())
        except Exception as e:
            self._report(event.warn, "An error occurred while setting up the firewall: {0}", ustr(e))

    def _report(self, report_function, message, *args):
        # Report the first 3 messages, then stop reporting for 12 hours
        if datetime.datetime.now(UTC) < self._report_after:
            return
        self._message_count += 1
        if self._message_count > 3:
            self._report_after = datetime.datetime.now(UTC) + datetime.timedelta(hours=12)
            self._message_count = 0
            return
        report_function(WALAEventOperation.ResetFirewall, message, *args)


class SetRootDeviceScsiTimeout(PeriodicOperation):
    def __init__(self, osutil):
        super(SetRootDeviceScsiTimeout, self).__init__(conf.get_root_device_scsi_timeout_period())
        self._osutil = osutil

    def _operation(self):
        self._osutil.set_scsi_disks_timeout(conf.get_root_device_scsi_timeout())


class MonitorHostNameChanges(PeriodicOperation):
    def __init__(self, osutil):
        super(MonitorHostNameChanges, self).__init__(conf.get_monitor_hostname_period())
        self._osutil = osutil
        self._hostname = self._osutil.get_hostname_record()

    def _operation(self):
        curr_hostname = socket.gethostname()
        if curr_hostname != self._hostname:
            logger.info("EnvMonitor: Detected hostname change: {0} -> {1}", self._hostname, curr_hostname)
            self._osutil.set_hostname(curr_hostname)
            try:
                self._osutil.publish_hostname(curr_hostname, recover_nic=True)
            except Exception as e:
                msg = "Error while publishing the hostname: {0}".format(e)
                add_event(AGENT_NAME, op=WALAEventOperation.HostnamePublishing, is_success=False, message=msg, log_event=False)
            self._hostname = curr_hostname


class EnvHandler(ThreadHandlerInterface):
    """
    Monitor changes to dhcp and hostname.
    If dhcp client process re-start has occurred, reset routes, dhcp with fabric.

    Monitor scsi disk.
    If new scsi disk found, set timeout
    """

    _THREAD_NAME = "EnvHandler"

    @staticmethod
    def get_thread_name():
        return EnvHandler._THREAD_NAME

    def __init__(self):
        self.stopped = True
        self.hostname = None
        self.env_thread = None

    def run(self):
        if not self.stopped:
            logger.info("Stop existing env monitor service.")
            self.stop()

        self.stopped = False
        logger.info("Starting env monitor service.")
        self.start()

    def is_alive(self):
        return self.env_thread.is_alive()

    def start(self):
        self.env_thread = threading.Thread(target=self.daemon)
        self.env_thread.daemon = True
        self.env_thread.name = self.get_thread_name()
        self.env_thread.start()

    def daemon(self):
        try:
            # The initialization of the protocol needs to be done within the environment thread itself rather
            # than initializing it in the ExtHandler thread. This is done to avoid any concurrency issues as each
            # thread would now have its own ProtocolUtil object as per the SingletonPerThread model.
            protocol_util = get_protocol_util()
            protocol = protocol_util.get_protocol()
            osutil = get_osutil()

            periodic_operations = [
                RemovePersistentNetworkRules(osutil),
                MonitorDhcpClientRestart(osutil),
            ]

            if conf.enable_firewall():
                periodic_operations.append(EnableFirewall(protocol.get_endpoint()))
            if conf.get_root_device_scsi_timeout() is not None:
                periodic_operations.append(SetRootDeviceScsiTimeout(osutil))
            if conf.get_monitor_hostname():
                periodic_operations.append(MonitorHostNameChanges(osutil))

            while not self.stopped:
                try:
                    for op in periodic_operations:
                        op.run()
                except Exception as e:
                    logger.error("An error occurred in the environment thread main loop; will skip the current iteration.\n{0}", ustr(e))
                finally:
                    PeriodicOperation.sleep_until_next_operation(periodic_operations)
        except Exception as e:
            logger.error("An error occurred in the environment thread; will exit the thread.\n{0}", ustr(e))

    def stop(self):
        """
        Stop server communication and join the thread to main thread.
""" self.stopped = True if self.env_thread is not None: self.env_thread.join() Azure-WALinuxAgent-a976115/azurelinuxagent/ga/extensionprocessutil.py000066400000000000000000000215771510742556200261050ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # # You may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import signal import time from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.event import WALAEventOperation, add_event from azurelinuxagent.common.exception import ExtensionErrorCodes, ExtensionOperationError, ExtensionError from azurelinuxagent.common.future import ustr TELEMETRY_MESSAGE_MAX_LEN = 3200 def wait_for_process_completion_or_timeout(process, timeout, cpu_controller): """ Utility function that waits for the process to complete within the given time frame. This function will terminate the process if when the given time frame elapses. 
    :param process: Reference to a running process
    :param timeout: Number of seconds to wait for the process to complete before killing it
    :return: Three values: whether the process timed out, the return code of the process
        (None if timed out), and the CPU throttled time
    """
    while timeout > 0 and process.poll() is None:
        time.sleep(1)
        timeout -= 1

    return_code = None
    throttled_time = 0

    if timeout == 0:
        throttled_time = get_cpu_throttled_time(cpu_controller)
        os.killpg(os.getpgid(process.pid), signal.SIGKILL)
    else:
        # process completed or forked; sleep 1 sec to give the child process (if any) a chance to start
        time.sleep(1)
        return_code = process.wait()

    return timeout == 0, return_code, throttled_time


def handle_process_completion(process, command, timeout, stdout, stderr, error_code, cpu_controller=None):
    """
    Utility function that waits for process completion and retrieves its output (stdout and stderr) if it
    completed before the timeout period. Otherwise, the process will get killed and an ExtensionError will
    be raised. In case the return code is non-zero, ExtensionError will be raised.
    :param process: Reference to a running process
    :param command: The extension command to run
    :param timeout: Number of seconds to wait before killing the process
    :param stdout: Must be a file since we seek on it when parsing the subprocess output
    :param stderr: Must be a file since we seek on it when parsing the subprocess outputs
    :param error_code: The error code to set if we raise an ExtensionError
    :param cpu_controller: References the cpu controller for the cgroup
    :return:
    """
    # Wait for process completion or timeout
    timed_out, return_code, throttled_time = wait_for_process_completion_or_timeout(process, timeout, cpu_controller)
    process_output = read_output(stdout, stderr)

    if timed_out:
        if cpu_controller is not None:  # Report CPUThrottledTime when timeout happens
            raise ExtensionError("Timeout({0});CPUThrottledTime({1}secs): {2}\n{3}".format(timeout, throttled_time, command, process_output),
                                 code=ExtensionErrorCodes.PluginHandlerScriptTimedout)

        raise ExtensionError("Timeout({0}): {1}\n{2}".format(timeout, command, process_output),
                             code=ExtensionErrorCodes.PluginHandlerScriptTimedout)

    if return_code != 0:
        noexec_warning = ""
        if return_code == 126:  # Permission denied
            noexec_path = _check_noexec()
            if noexec_path is not None:
                noexec_warning = "\nWARNING: {0} is mounted with the noexec flag, which can prevent execution of VM Extensions.".format(noexec_path)
        raise ExtensionOperationError(
            "Non-zero exit code: {0}, {1}{2}\n{3}".format(return_code, command, noexec_warning, process_output),
            code=error_code,
            exit_code=return_code)

    return process_output


#
# Collect a sample of errors while checking for the noexec flag. Consider removing this telemetry after a few releases.
#
_COLLECT_NOEXEC_ERRORS = True


def _check_noexec():
    """
    Check if /var is mounted with the noexec flag.
    """
    # W0603: Using the global statement (global-statement)
    # OK to disable; _COLLECT_NOEXEC_ERRORS is used only within _check_noexec, but needs to persist across calls.
    global _COLLECT_NOEXEC_ERRORS  # pylint: disable=W0603

    try:
        agent_dir = conf.get_lib_dir()
        with open('/proc/mounts', 'r') as f:
            while True:
                line = f.readline()
                if line == "":  # EOF
                    break
                # The mount point is on the second column, and the flags are on the fourth. e.g.
                #
                #     # grep /var /proc/mounts
                #     /dev/mapper/rootvg-varlv /var xfs rw,seclabel,noexec,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
                #
                columns = line.split()
                mount_point = columns[1]
                flags = columns[3]
                if agent_dir.startswith(mount_point) and "noexec" in flags:
                    message = "The noexec flag is set on {0}. This can prevent extensions from executing.".format(mount_point)
                    logger.warn(message)
                    add_event(op=WALAEventOperation.NoExec, is_success=False, message=message)
                    return mount_point
    except Exception as e:
        message = "Error while checking the noexec flag: {0}".format(e)
        logger.warn(message)
        if _COLLECT_NOEXEC_ERRORS:
            _COLLECT_NOEXEC_ERRORS = False
            add_event(op=WALAEventOperation.NoExec, is_success=False, log_event=False, message="Error while checking the noexec flag: {0}".format(e))

    return None


def read_output(stdout, stderr):
    """
    Read the output of the process sent to stdout and stderr and trim them to the max appropriate length.

    :param stdout: File containing the stdout of the process
    :param stderr: File containing the stderr of the process
    :return: Returns the formatted concatenated stdout and stderr of the process
    """
    try:
        stdout.seek(0)
        stderr.seek(0)

        stdout = ustr(stdout.read(TELEMETRY_MESSAGE_MAX_LEN), encoding='utf-8', errors='backslashreplace')
        stderr = ustr(stderr.read(TELEMETRY_MESSAGE_MAX_LEN), encoding='utf-8', errors='backslashreplace')

        return format_stdout_stderr(stdout, stderr)
    except Exception as e:
        return format_stdout_stderr("", "Cannot read stdout/stderr: {0}".format(ustr(e)))


def format_stdout_stderr(stdout, stderr):
    """
    Format stdout and stderr's output to make it suitable in telemetry.
    The goal is to maximize the amount of output given the constraints of telemetry.

    For example, if there is more stderr output than stdout output, give more buffer space to stderr.

    :param str stdout: characters captured from stdout
    :param str stderr: characters captured from stderr
    :return: a string formatted with stdout and stderr that is less than or equal to TELEMETRY_MESSAGE_MAX_LEN
    :rtype: str
    """
    template = "[stdout]\n{0}\n\n[stderr]\n{1}"
    # +6 == len("{0}") + len("{1}")
    max_len = TELEMETRY_MESSAGE_MAX_LEN
    max_len_each = int((max_len - len(template) + 6) / 2)

    if max_len_each <= 0:
        return ''

    def to_s(captured_stdout, stdout_offset, captured_stderr, stderr_offset):
        s = template.format(captured_stdout[stdout_offset:], captured_stderr[stderr_offset:])
        return s

    if len(stdout) + len(stderr) < max_len:
        return to_s(stdout, 0, stderr, 0)
    elif len(stdout) < max_len_each:
        bonus = max_len_each - len(stdout)
        stderr_len = min(max_len_each + bonus, len(stderr))
        return to_s(stdout, 0, stderr, -1 * stderr_len)
    elif len(stderr) < max_len_each:
        bonus = max_len_each - len(stderr)
        stdout_len = min(max_len_each + bonus, len(stdout))
        return to_s(stdout, -1 * stdout_len, stderr, 0)
    else:
        return to_s(stdout, -1 * max_len_each, stderr, -1 * max_len_each)


def get_cpu_throttled_time(cpu_controller):
    """
    return the throttled time for the given cgroup.
    """
    throttled_time = 0
    if cpu_controller is not None:
        try:
            throttled_time = cpu_controller.get_cpu_throttled_time(read_previous_throttled_time=False)
        except Exception as e:
            logger.warn("Failed to get cpu throttled time for the extension: {0}", ustr(e))

    return throttled_time


# ===== Azure-WALinuxAgent-a976115/azurelinuxagent/ga/exthandlers.py =====

# Microsoft Azure Linux Agent
#
# Copyright Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
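The truncation scheme in `format_stdout_stderr` shown earlier splits the telemetry budget evenly between the two streams and donates any unused budget from the shorter stream to the longer one, keeping the most recent characters. A small demonstration of the same scheme, using a reduced, hypothetical budget of 50 characters instead of TELEMETRY_MESSAGE_MAX_LEN:

```python
def format_stdout_stderr(stdout, stderr, max_len=50):
    # Simplified copy of the agent's formatter with a small, hypothetical
    # budget (max_len) so the truncation behavior is easy to observe.
    template = "[stdout]\n{0}\n\n[stderr]\n{1}"
    # +6 == len("{0}") + len("{1}"): the placeholders are replaced, not kept
    max_len_each = int((max_len - len(template) + 6) / 2)
    if max_len_each <= 0:
        return ''

    def to_s(out, out_offset, err, err_offset):
        return template.format(out[out_offset:], err[err_offset:])

    if len(stdout) + len(stderr) < max_len:
        return to_s(stdout, 0, stderr, 0)          # everything fits
    elif len(stdout) < max_len_each:
        bonus = max_len_each - len(stdout)         # stdout donates unused budget
        stderr_len = min(max_len_each + bonus, len(stderr))
        return to_s(stdout, 0, stderr, -1 * stderr_len)
    elif len(stderr) < max_len_each:
        bonus = max_len_each - len(stderr)         # stderr donates unused budget
        stdout_len = min(max_len_each + bonus, len(stdout))
        return to_s(stdout, -1 * stdout_len, stderr, 0)
    return to_s(stdout, -1 * max_len_each, stderr, -1 * max_len_each)


# stdout is short, stderr is long: stdout keeps all of "ok" and donates its
# unused budget to stderr, which is truncated to its most recent characters.
result = format_stdout_stderr("ok", "E" * 100)
```

With a budget of 50 and a 26-character template (20 fixed characters once the placeholders are substituted), each stream gets 15 characters; "ok" uses 2 and donates 13, so stderr keeps its last 28 characters and the result is exactly 50 characters long.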
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import copy
import datetime
import glob
import json
import os
import re
import shutil
import stat
import sys
import tempfile
import time
import zipfile

from collections import defaultdict
from functools import partial

from azurelinuxagent.common import conf
from azurelinuxagent.common import logger
from azurelinuxagent.common.osutil import get_osutil
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.common import version
from azurelinuxagent.common import event
from azurelinuxagent.common.agent_supported_feature import get_agent_supported_features_list_for_extensions, \
    SupportedFeatureNames, get_supported_feature_by_name, get_agent_supported_features_list_for_crp
from azurelinuxagent.common.utils.textutil import redact_sas_token
from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator
from azurelinuxagent.ga.policy.policy_engine import ExtensionPolicyEngine
from azurelinuxagent.common.datacontract import get_properties, set_properties
from azurelinuxagent.common.errorstate import ErrorState
from azurelinuxagent.common.event import add_event, elapsed_milliseconds, WALAEventOperation, \
    add_periodic, EVENTS_DIRECTORY
from azurelinuxagent.common.exception import ExtensionDownloadError, ExtensionError, ExtensionErrorCodes, \
    ExtensionOperationError, ExtensionUpdateError, ProtocolError, ProtocolNotFoundError, ExtensionsGoalStateError, \
    GoalStateAggregateStatusCodes, MultiConfigExtensionEnableError
from azurelinuxagent.common.future import ustr, UTC, is_file_not_found_error
from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource
from azurelinuxagent.common.protocol.restapi import ExtensionStatus, ExtensionSubStatus, Extension, ExtHandlerStatus, \
    VMStatus, GoalStateAggregateStatus, ExtensionState, ExtensionRequestedState, ExtensionSettings
from azurelinuxagent.common.utils import textutil
from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION
from azurelinuxagent.ga.signature_validation_util import validate_handler_manifest_signing_info, SignatureValidationError, \
    PackageValidationError, save_signature_validation_state, signature_validation_enabled, validate_signature

_HANDLER_NAME_PATTERN = r'^([^-]+)'
_HANDLER_VERSION_PATTERN = r'(\d+(?:\.\d+)*)'
_HANDLER_PATTERN = _HANDLER_NAME_PATTERN + r"-" + _HANDLER_VERSION_PATTERN
_HANDLER_PKG_PATTERN = re.compile(_HANDLER_PATTERN + r'\.zip$', re.IGNORECASE)

_DEFAULT_EXT_TIMEOUT_MINUTES = 90

_VALID_HANDLER_STATUS = ['Ready', 'NotReady', "Installing", "Unresponsive"]

HANDLER_NAME_PATTERN = re.compile(_HANDLER_NAME_PATTERN, re.IGNORECASE)
HANDLER_COMPLETE_NAME_PATTERN = re.compile(_HANDLER_PATTERN + r'$', re.IGNORECASE)
HANDLER_PKG_EXT = ".zip"

# This is the default value for the env variables, whenever we call a command which is not an update scenario, we
# set the env variable value to NOT_RUN to reduce ambiguity for the extension publishers
NOT_RUN = "NOT_RUN"

# Max size of individual status file
_MAX_STATUS_FILE_SIZE_IN_BYTES = 128 * 1024  # 128K

# Truncating length of fields.
_MAX_STATUS_MESSAGE_LENGTH = 1024  # 1k message allowed to be shown in the portal.
_MAX_SUBSTATUS_FIELD_LENGTH = 10 * 1024  # Making 10K; allowing fields to have enough debugging information.
_TRUNCATED_SUFFIX = u" ... [TRUNCATED]"

# Status file specific retries and delays.
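The handler naming patterns defined above split a handler directory or package name into its name and version parts. A quick sketch exercising `HANDLER_COMPLETE_NAME_PATTERN` with the patterns inlined (the helper function and the example handler name are hypothetical; note that `get_ext_handler_instance_from_path` itself splits on `name.rfind('-')` rather than via the regex groups):

```python
import re

# Patterns as defined above: the name is everything before the '-' separator,
# the version is a dotted run of digits after it.
_HANDLER_NAME_PATTERN = r'^([^-]+)'
_HANDLER_VERSION_PATTERN = r'(\d+(?:\.\d+)*)'
_HANDLER_PATTERN = _HANDLER_NAME_PATTERN + r"-" + _HANDLER_VERSION_PATTERN
HANDLER_COMPLETE_NAME_PATTERN = re.compile(_HANDLER_PATTERN + r'$', re.IGNORECASE)


def split_handler_name(full_name):
    """Return (handler_name, version), or None if full_name is not a handler directory name."""
    match = HANDLER_COMPLETE_NAME_PATTERN.match(full_name)
    if match is None:
        return None
    return match.group(1), match.group(2)


# Hypothetical handler directory name, as found under /var/lib/waagent
print(split_handler_name("Microsoft.OSTCExtensions.VMAccessForLinux-1.5.11"))
# ('Microsoft.OSTCExtensions.VMAccessForLinux', '1.5.11')
```

A name without the `-<version>` suffix does not match the anchored pattern, which is how the cleanup code distinguishes handler directories from unrelated files in the lib directory.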
_NUM_OF_STATUS_FILE_RETRIES = 5
_STATUS_FILE_RETRY_DELAY = 2  # seconds

# This is the default sequence number we use when there are no settings available for Handlers
_DEFAULT_SEQ_NO = "0"

# For extension disallowed errors (e.g. blocked by policy, extensions disabled), this mapping is used to generate
# user-friendly error messages and determine the appropriate terminal error code based on the blocked operation.
# Format: {<requested state>: (<operation name>, <terminal error code>)}
# - The first element of the tuple is a user-friendly operation name included in error messages.
# - The second element of the tuple is the CRP terminal error code for the operation.
_EXT_DISALLOWED_ERROR_MAP = \
    {
        ExtensionRequestedState.Enabled: ('run', ExtensionErrorCodes.PluginEnableProcessingFailed),

        # TODO: CRP does not currently have a terminal error code for uninstall. Once this code is added, use
        # it instead of PluginDisableProcessingFailed below.
        #
        # Note: currently, when uninstall is requested for an extension, CRP polls until the agent does not
        # report status for that extension, or until timeout is reached. In the case of an extension disallowed
        # error, agent reports failed status on behalf of the extension, which will cause CRP to poll for the full
        # timeout, instead of failing fast.
        ExtensionRequestedState.Uninstall: ('uninstall', ExtensionErrorCodes.PluginDisableProcessingFailed),

        # "Disable" is an internal operation, users are unaware of it. We surface the term "uninstall" instead.
        ExtensionRequestedState.Disabled: ('uninstall', ExtensionErrorCodes.PluginDisableProcessingFailed),
    }


class ExtHandlerStatusValue(object):
    """
    Statuses for Extension Handlers
    """
    ready = "Ready"
    not_ready = "NotReady"


class ExtensionStatusValue(object):
    """
    Statuses for Extensions
    """
    transitioning = "transitioning"
    warning = "warning"
    error = "error"
    success = "success"
    STRINGS = ['transitioning', 'warning', 'error', 'success']


_EXTENSION_TERMINAL_STATUSES = [ExtensionStatusValue.error, ExtensionStatusValue.success]


class ExtCommandEnvVariable(object):
    Prefix = "AZURE_GUEST_AGENT"
    DisableReturnCode = "{0}_DISABLE_CMD_EXIT_CODE".format(Prefix)
    DisableReturnCodeMultipleExtensions = "{0}_DISABLE_CMD_EXIT_CODES_MULTIPLE_EXTENSIONS".format(Prefix)
    UninstallReturnCode = "{0}_UNINSTALL_CMD_EXIT_CODE".format(Prefix)
    ExtensionPath = "{0}_EXTENSION_PATH".format(Prefix)
    ExtensionVersion = "{0}_EXTENSION_VERSION".format(Prefix)
    ExtensionSeqNumber = "ConfigSequenceNumber"  # At par with Windows Guest Agent
    ExtensionName = "ConfigExtensionName"
    UpdatingFromVersion = "{0}_UPDATING_FROM_VERSION".format(Prefix)
    WireProtocolAddress = "{0}_WIRE_PROTOCOL_ADDRESS".format(Prefix)
    ExtensionSupportedFeatures = "{0}_EXTENSION_SUPPORTED_FEATURES".format(Prefix)


def validate_has_key(obj, key, full_key_path):
    if key not in obj:
        raise ExtensionStatusError(msg="Invalid status format by extension: Missing {0} key".format(full_key_path),
                                   code=ExtensionStatusError.StatusFileMalformed)


def validate_in_range(val, valid_range, name):
    if val not in valid_range:
        raise ExtensionStatusError(msg="Invalid value {0} in range {1} at the node {2}".format(val, valid_range, name),
                                   code=ExtensionStatusError.StatusFileMalformed)


def parse_formatted_message(formatted_message):
    if formatted_message is None:
        return None
    validate_has_key(formatted_message, 'lang', 'formattedMessage/lang')
    validate_has_key(formatted_message, 'message', 'formattedMessage/message')
    return formatted_message.get('message')


def parse_ext_substatus(substatus):
    # Check extension sub status format
    validate_has_key(substatus, 'status', 'substatus/status')
    validate_in_range(substatus['status'], ExtensionStatusValue.STRINGS, 'substatus/status')
    status = ExtensionSubStatus()
    status.name = substatus.get('name')
    status.status = substatus.get('status')
    status.code = substatus.get('code', 0)
    formatted_message = substatus.get('formattedMessage')
    status.message = parse_formatted_message(formatted_message)
    return status


def parse_ext_status(ext_status, data):
    if data is None:
        return
    if not isinstance(data, list):
        data_string = ustr(data)[:4096]
        raise ExtensionStatusError(msg="The extension status must be an array: {0}".format(data_string),
                                   code=ExtensionStatusError.StatusFileMalformed)
    if not data:
        return

    # Currently, only the first status will be reported
    data = data[0]

    # Check extension status format
    validate_has_key(data, 'status', 'status')
    status_data = data['status']
    validate_has_key(status_data, 'status', 'status/status')

    status = status_data['status']
    if status not in ExtensionStatusValue.STRINGS:
        status = ExtensionStatusValue.error

    applied_time = status_data.get('configurationAppliedTime')
    ext_status.configurationAppliedTime = applied_time
    ext_status.operation = status_data.get('operation')
    ext_status.status = status
    ext_status.code = status_data.get('code', 0)
    formatted_message = status_data.get('formattedMessage')
    ext_status.message = parse_formatted_message(formatted_message)
    substatus_list = status_data.get('substatus', [])
    # some extensions incorrectly report an empty substatus with a null value
    if substatus_list is None:
        substatus_list = []
    for substatus in substatus_list:
        if substatus is not None:
            ext_status.substatusList.append(parse_ext_substatus(substatus))


def migrate_handler_state():
    """
    Migrate handler state and status (if they exist) from an agent-owned directory into the handler-owned
    config directory

    Notes:
     - The v2.0.x branch wrote all handler-related state into the handler-owned config directory
       (e.g., /var/lib/waagent/Microsoft.Azure.Extensions.LinuxAsm-2.0.1/config).
     - The v2.1.x branch originally moved that state into an agent-owned handler state directory
       (e.g., /var/lib/waagent/handler_state).
     - This move can cause v2.1.x agents to multiply invoke a handler's install command. It also makes
       clean-up more difficult since the agent must remove the state as well as the handler directory.
    """
    handler_state_path = os.path.join(conf.get_lib_dir(), "handler_state")
    if not os.path.isdir(handler_state_path):
        return

    for handler_path in glob.iglob(os.path.join(handler_state_path, "*")):
        handler = os.path.basename(handler_path)
        handler_config_path = os.path.join(conf.get_lib_dir(), handler, "config")
        if os.path.isdir(handler_config_path):
            for file in ("State", "Status"):  # pylint: disable=redefined-builtin
                from_path = os.path.join(handler_state_path, handler, file.lower())
                to_path = os.path.join(handler_config_path, "Handler" + file)
                if os.path.isfile(from_path) and not os.path.isfile(to_path):
                    try:
                        shutil.move(from_path, to_path)
                    except Exception as e:
                        logger.warn("Exception occurred migrating {0} {1} file: {2}", handler, file, str(e))

    try:
        shutil.rmtree(handler_state_path)
    except Exception as e:
        logger.warn("Exception occurred removing {0}: {1}", handler_state_path, str(e))
    return


class ExtHandlerState(object):
    NotInstalled = "NotInstalled"
    Installed = "Installed"
    Enabled = "Enabled"
    FailedUpgrade = "FailedUpgrade"


class GoalStateStatus(object):
    """
    This is an Enum to define the State of the GoalState as a whole. This is reported as part of the
    'vmArtifactsAggregateStatus.goalStateAggregateStatus' in the status blob.
    Note: not to be confused with the State of the ExtHandler, which is reported as part of 'handlerAggregateStatus'
    """
    Success = "Success"
    Failed = "Failed"
    # The following field is not used now but would be needed once Status reporting is moved to a separate thread.
Initialize = "Initialize" Transitioning = "Transitioning" def get_exthandlers_handler(protocol): return ExtHandlersHandler(protocol) def list_agent_lib_directory(skip_agent_package=True, ignore_names=None): lib_dir = conf.get_lib_dir() for name in os.listdir(lib_dir): path = os.path.join(lib_dir, name) if ignore_names is not None and any(ignore_names) and name in ignore_names: continue if skip_agent_package and (version.is_agent_package(path) or version.is_agent_path(path)): continue yield name, path class ExtHandlersHandler(object): def __init__(self, protocol): self.protocol = protocol self.ext_handlers = None # Maintain a list of extension handler objects that are disallowed (e.g. blocked by policy, extensions disabled, etc.). # Extension status, if it exists, is always reported for the extensions in this list. List is reset for each goal state. self.__disallowed_ext_handlers = [] # The GoalState Aggregate status needs to report the last status of the GoalState. Since we only process # extensions on goal state change, we need to maintain its state. # Setting the status to None here. This would be overridden as soon as the first GoalState is processed self.__gs_aggregate_status = None # CRP Activity ID for the goal state that is being processed. Initialized once we start processing the goal state. 
        self._gs_activity_id = '00000000-0000-0000-0000-000000000000'

        self.report_status_error_state = ErrorState()

    def __last_gs_unsupported(self):
        # Return if the last GoalState was unsupported
        return self.__gs_aggregate_status is not None and \
               self.__gs_aggregate_status.status == GoalStateStatus.Failed and \
               self.__gs_aggregate_status.code == GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures

    def run(self):
        try:
            gs = self.protocol.get_goal_state()
            egs = gs.extensions_goal_state
            self._gs_activity_id = egs.activity_id

            # self.ext_handlers needs to be initialized before returning, since status reporting depends on it; also
            # we make a deep copy of the extensions, since changes are made to self.ext_handlers while processing the extensions
            self.ext_handlers = copy.deepcopy(egs.extensions)

            if self._extensions_on_hold():
                return

            utc_start = datetime.datetime.now(UTC)
            error = None
            message = "ProcessExtensionsGoalState started [{0} channel: {1} source: {2} activity: {3} correlation {4} created: {5}]".format(
                egs.id, egs.channel, egs.source, egs.activity_id, egs.correlation_id, egs.created_on_timestamp)
            logger.info('')
            logger.info(message)
            add_event(op=WALAEventOperation.ExtensionProcessing, message=message)
            try:
                self.__process_and_handle_extensions(egs.svd_sequence_number, egs.id)
                self._cleanup_outdated_handlers()
            except Exception as e:
                error = u"Error processing extensions:{0}".format(textutil.format_exception(e))
            finally:
                duration = elapsed_milliseconds(utc_start)
                if error is None:
                    message = 'ProcessExtensionsGoalState completed [{0} {1} ms]\n'.format(egs.id, duration)
                    logger.info(message)
                else:
                    message = 'ProcessExtensionsGoalState failed [{0} {1} ms]\n{2}'.format(egs.id, duration, error)
                    logger.error(message)
                add_event(op=WALAEventOperation.ExtensionProcessing, is_success=(error is None), message=message, log_event=False, duration=duration)
        except Exception as error:
            msg = u"ProcessExtensionsInGoalState - Exception processing extension handlers:{0}".format(textutil.format_exception(error))
            logger.error(msg)
            add_event(op=WALAEventOperation.ExtensionProcessing, is_success=False, message=msg, log_event=False)

    def __get_unsupported_features(self):
        required_features = self.protocol.get_goal_state().extensions_goal_state.required_features
        supported_features = get_agent_supported_features_list_for_crp()
        return [feature for feature in required_features if feature not in supported_features]

    def __process_and_handle_extensions(self, svd_sequence_number, goal_state_id):
        try:
            # Verify we satisfy all required features, if any. If not, report failure here itself,
            # no need to process anything further.
            unsupported_features = self.__get_unsupported_features()
            if any(unsupported_features):
                msg = "Failing GS {0} as Unsupported features found: {1}".format(goal_state_id, ', '.join(unsupported_features))
                logger.warn(msg)
                self.__gs_aggregate_status = GoalStateAggregateStatus(status=GoalStateStatus.Failed, seq_no=svd_sequence_number,
                                                                      code=GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures,
                                                                      message=msg)
                add_event(op=WALAEventOperation.GoalStateUnsupportedFeatures,
                          is_success=False,
                          message=msg,
                          log_event=False)
            else:
                self.handle_ext_handlers(goal_state_id)
                self.__gs_aggregate_status = GoalStateAggregateStatus(status=GoalStateStatus.Success, seq_no=svd_sequence_number,
                                                                      code=GoalStateAggregateStatusCodes.Success,
                                                                      message="GoalState executed successfully")
        except Exception as error:
            msg = "Unexpected error when processing goal state:{0}".format(textutil.format_exception(error))
            self.__gs_aggregate_status = GoalStateAggregateStatus(status=GoalStateStatus.Failed, seq_no=svd_sequence_number,
                                                                  code=GoalStateAggregateStatusCodes.GoalStateUnknownFailure,
                                                                  message=msg)
            logger.warn(msg)
            add_event(op=WALAEventOperation.ExtensionProcessing, is_success=False, message=msg, log_event=False)

    @staticmethod
    def get_ext_handler_instance_from_path(name, path, protocol, skip_handlers=None):
        if not os.path.isdir(path) or re.match(HANDLER_NAME_PATTERN, name) is None:
            return None
        separator = name.rfind('-')
        handler_name = name[0:separator]
        if skip_handlers is not None and handler_name in skip_handlers:
            # Handler in skip_handlers list, not parsing it
            return None

        eh = Extension(name=handler_name)
        eh.version = str(FlexibleVersion(name[separator + 1:]))

        return ExtHandlerInstance(eh, protocol)

    def _cleanup_outdated_handlers(self):
        # Skip cleanup if the previous GS was Unsupported
        if self.__last_gs_unsupported():
            return

        handlers = []
        pkgs = []
        ext_handlers_in_gs = [ext_handler.name for ext_handler in self.ext_handlers]

        # Build a collection of uninstalled handlers and orphaned packages
        # Note:
        # -- An orphaned package is one without a corresponding handler directory
        for item, path in list_agent_lib_directory(skip_agent_package=True):
            try:
                handler_instance = ExtHandlersHandler.get_ext_handler_instance_from_path(name=item,
                                                                                         path=path,
                                                                                         protocol=self.protocol,
                                                                                         skip_handlers=ext_handlers_in_gs)
                if handler_instance is not None:
                    # Since this handler name doesn't exist in the GS, marking it for deletion
                    handlers.append(handler_instance)
                    continue
            except Exception:
                continue

            if os.path.isfile(path) and \
                    not os.path.isdir(path[0:-len(HANDLER_PKG_EXT)]):
                if not re.match(_HANDLER_PKG_PATTERN, item):
                    continue
                pkgs.append(path)

        # Then, remove the orphaned packages
        for pkg in pkgs:
            try:
                os.remove(pkg)
                logger.verbose("Removed orphaned extension package {0}".format(pkg))
            except OSError as e:
                logger.warn("Failed to remove orphaned package {0}: {1}".format(pkg, e.strerror))

        # Finally, remove the directories and packages of the orphaned handlers, i.e.
Any extension directory that # is still in the FileSystem but not in the GoalState for handler in handlers: handler.remove_ext_handler() pkg = os.path.join(conf.get_lib_dir(), handler.get_full_name() + HANDLER_PKG_EXT) if os.path.isfile(pkg): try: os.remove(pkg) logger.verbose("Removed extension package {0}".format(pkg)) except OSError as e: logger.warn("Failed to remove extension package {0}: {1}".format(pkg, e.strerror)) def _extensions_on_hold(self): if conf.get_enable_overprovisioning(): if self.protocol.get_goal_state().extensions_goal_state.on_hold: msg = "Extension handling is on hold" logger.info(msg) add_event(op=WALAEventOperation.ExtensionProcessing, message=msg) return True return False @staticmethod def __get_dependency_level(tup): (extension, handler) = tup if extension is not None: return extension.dependency_level_sort_key(handler.state) return handler.dependency_level_sort_key() def __get_sorted_extensions_for_processing(self): all_extensions = [] for handler in self.ext_handlers: if any(handler.settings): all_extensions.extend([(ext, handler) for ext in handler.settings]) else: # We need to process the Handler even if no settings specified from CRP (legacy behavior) logger.info("No extension/run-time settings settings found for {0}".format(handler.name)) all_extensions.append((None, handler)) all_extensions.sort(key=self.__get_dependency_level) return all_extensions def handle_ext_handlers(self, goal_state_id): if not self.ext_handlers: logger.info("No extension handlers found, not processing anything.") return wait_until = datetime.datetime.now(UTC) + datetime.timedelta(minutes=_DEFAULT_EXT_TIMEOUT_MINUTES) all_extensions = self.__get_sorted_extensions_for_processing() # Since all_extensions are sorted based on sort_key, the last element would be the maximum based on the sort_key max_dep_level = self.__get_dependency_level(all_extensions[-1]) if any(all_extensions) else 0 depends_on_err_msg = None extensions_enabled = 
conf.get_extensions_enabled() # Instantiate policy engine, and use same engine to handle all extension handlers. If an error is thrown during # policy engine initialization, we block all extensions and report the error via handler status for each extension. # Save policy to history folder. policy_error = None try: gs_history = self.protocol.get_goal_state().history policy_engine = ExtensionPolicyEngine() if policy_engine is not None and policy_engine.policy_file_contents is not None and gs_history is not None: gs_history.save(policy_engine.policy_file_contents, "waagent_policy.json") except Exception as ex: policy_error = ex self.__disallowed_ext_handlers = [] for extension, ext_handler in all_extensions: handler_i = ExtHandlerInstance(ext_handler, self.protocol, extension=extension) # Get user-friendly operation name and terminal error code to use in status messages if extension is disallowed operation, error_code = _EXT_DISALLOWED_ERROR_MAP.get(ext_handler.state) # In case of extensions disabled, we skip processing extensions. But CRP is still waiting for some status # back for the skipped extensions. In order to propagate the status back to CRP, we will report status back # here with an error message. if not extensions_enabled: ext_full_name = handler_i.get_extension_full_name(extension) agent_conf_file_path = get_osutil().get_agent_conf_file_path() msg = "Extension '{0}' will not be processed since extension processing is disabled. To enable extension " \ "processing, set Extensions.Enabled=y in '{1}'".format(ext_full_name, agent_conf_file_path) self.__handle_ext_disallowed_error(handler_i, error_code, report_op=WALAEventOperation.ExtensionProcessing, message=msg, extension=extension) continue # If an error was thrown during policy engine initialization, skip further processing of the extension. # CRP is still waiting for status, so we report error status here. 
            if policy_error is not None:
                msg = "Extension will not be processed: {0}".format(ustr(policy_error))
                self.__handle_ext_disallowed_error(ext_handler_i=handler_i, error_code=error_code,
                                                   report_op=WALAEventOperation.ExtensionPolicy,
                                                   message=msg, extension=extension)
                continue

            # In case of depends-on errors, we skip processing extensions if there was an error processing dependent extensions.
            # But CRP is still waiting for some status back for the skipped extensions. In order to propagate the status
            # back to CRP, we report status back here with the relevant error message for each of the dependent extensions.
            if depends_on_err_msg is not None:

                # For MC extensions, report the HandlerStatus as-is and create a new placeholder per extension if one doesn't exist
                if handler_i.should_perform_multi_config_op(extension):
                    # Ensure some handler status exists for the Handler; if not, set it here
                    if handler_i.get_handler_status() is None:
                        handler_i.set_handler_status(message=depends_on_err_msg, code=-1)

                    handler_i.create_status_file(extension, status=ExtensionStatusValue.error, code=-1,
                                                 operation=WALAEventOperation.ExtensionProcessing,
                                                 message=depends_on_err_msg, overwrite=False)

                # For SC extensions, overwrite the HandlerStatus with the relevant message
                else:
                    handler_i.set_handler_status(message=depends_on_err_msg, code=-1)

                continue

            # Invoke the policy engine to determine whether the extension is allowed:
            # - if allowed: process the extension and record whether it executed successfully
            # - if disallowed: do not process the handler and report an error on behalf of the extension; dependent
            #   extensions will also be blocked.
            extension_allowed = policy_engine.should_allow_extension(ext_handler.name)
            if not extension_allowed:
                msg = (
                    "Extension will not be processed: failed to {0} extension '{1}' because it is not specified "
                    "as an allowed extension. To {0}, add the extension to the list of allowed extensions in the policy file ('{2}')."
                ).format(operation, ext_handler.name, conf.get_policy_file_path())
                self.__handle_ext_disallowed_error(handler_i, error_code,
                                                   report_op=WALAEventOperation.ExtensionPolicy,
                                                   message=msg, extension=extension)
                extension_success = False
            else:
                extension_success = self.handle_ext_handler(handler_i, extension, goal_state_id)

            dep_level = self.__get_dependency_level((extension, ext_handler))
            if 0 <= dep_level < max_dep_level:
                extension_full_name = handler_i.get_extension_full_name(extension)
                try:
                    # Do not wait for extension status if the handler failed
                    if not extension_success:
                        raise Exception("Skipping processing of extensions since execution of dependent extension {0} failed".format(
                            extension_full_name))

                    # Wait for the extension installation until it is handled.
                    # This is done for install and enable, not for uninstall.
                    # If handled successfully, proceed with the current handler.
                    # Otherwise, skip the rest of the extension installation.
                    self.wait_for_handler_completion(handler_i, wait_until, extension=extension)
                except Exception as error:
                    logger.warn(
                        "Dependent extension {0} failed or timed out, will skip processing the rest of the extensions".format(
                            extension_full_name))
                    depends_on_err_msg = ustr(error)
                    add_event(name=extension_full_name,
                              version=handler_i.ext_handler.version,
                              op=WALAEventOperation.ExtensionProcessing,
                              is_success=False,
                              message=depends_on_err_msg)

    @staticmethod
    def wait_for_handler_completion(handler_i, wait_until, extension=None):
        """
        Check the status of the extension being handled. Wait until it reaches a terminal state or times out.
        :raises: Exception if it is not handled successfully.
        """
        extension_name = handler_i.get_extension_full_name(extension)

        # If the handler had no settings, we should not wait at all for the handler to report status.
        if extension is None:
            logger.info("No settings found for {0}, not waiting for its status".format(extension_name))
            return

        try:
            ext_completed, status = False, None

            # Keep polling for the extension status until it succeeds or times out
            while datetime.datetime.now(UTC) <= wait_until:
                ext_completed, status = handler_i.is_ext_handling_complete(extension)
                if ext_completed:
                    break
                time.sleep(5)

        except Exception as e:
            msg = "Failed to wait for Handler completion due to unknown error. Marking the dependent extension as failed: {0}, {1}".format(
                extension_name, textutil.format_exception(e))
            raise Exception(msg)

        # In case of a timeout or a terminal error state, we log it and raise.
        # In case the extension reported status at the last second, we should prioritize reporting status over the timeout.
        if not ext_completed and datetime.datetime.now(UTC) > wait_until:
            msg = "Dependent Extension {0} did not reach a terminal state within the allowed timeout. Last status was {1}".format(
                extension_name, status)
            raise Exception(msg)

        if status != ExtensionStatusValue.success:
            msg = "Dependent Extension {0} did not succeed. Status was {1}".format(extension_name, status)
            raise Exception(msg)

    def handle_ext_handler(self, ext_handler_i, extension, goal_state_id):
        """
        Execute the requested command for the handler and return whether it succeeded
        :param ext_handler_i: The ExtHandlerInstance object to execute the command on
        :param extension: The extension settings on which to run the command
        :param goal_state_id: ID of the current GoalState
        :return: True if the operation was successful, False if not
        """
        try:
            # Ensure the extension config was valid
            if ext_handler_i.ext_handler.is_invalid_setting:
                raise ExtensionsGoalStateError(ext_handler_i.ext_handler.invalid_setting_reason)

            handler_state = ext_handler_i.ext_handler.state

            # The Guest Agent currently only supports one installed version per extension on the VM.
            # If the extension version is unregistered and the customer wants to uninstall the extension,
            # we should let it go through even if the installed version doesn't exist in the Handler manifest (PIR) anymore.
            # If the target state is enabled and the version is not found in the manifest, do not process the extension.
            if ext_handler_i.decide_version(target_state=handler_state, extension=extension,
                                            gs_activity_id=self._gs_activity_id) is None and handler_state == ExtensionRequestedState.Enabled:
                handler_version = ext_handler_i.ext_handler.version
                name = ext_handler_i.ext_handler.name
                err_msg = "Unable to find version {0} in manifest for extension {1}".format(handler_version, name)
                ext_handler_i.set_operation(WALAEventOperation.Download)
                raise ExtensionError(msg=err_msg)

            # Handle everything at the extension level rather than the Handler level
            ext_handler_i.logger.info("Target handler state: {0} [{1}]", handler_state, goal_state_id)
            if handler_state == ExtensionRequestedState.Enabled:
                self.handle_enable(ext_handler_i, extension)
            elif handler_state == ExtensionRequestedState.Disabled:
                # The "disabled" state is now deprecated. Send telemetry if it is still being used on any VMs
                event.info(WALAEventOperation.RequestedStateDisabled,
                           'Goal State is requesting "disabled" state on {0} [Activity ID: {1}]',
                           ext_handler_i.ext_handler.name,
                           self._gs_activity_id)
                self.handle_disable(ext_handler_i, extension)
            elif handler_state == ExtensionRequestedState.Uninstall:
                self.handle_uninstall(ext_handler_i, extension=extension)
            else:
                message = u"Unknown ext handler state:{0}".format(handler_state)
                raise ExtensionError(message)

            return True

        except MultiConfigExtensionEnableError as error:
            ext_name = ext_handler_i.get_extension_full_name(extension)
            err_msg = "Error processing MultiConfig extension {0}: {1}".format(ext_name, ustr(error))
            # This error is only thrown for the enable operation on a MultiConfig extension.
            # Since these are maintained by the extensions, the expectation is that they update their status files
            # appropriately with their errors.
            # The extensions should already have a placeholder status file, but in case they don't, set one here to fail fast.
            ext_handler_i.create_status_file(extension, status=ExtensionStatusValue.error, code=error.code,
                                             operation=ext_handler_i.operation, message=err_msg, overwrite=False)
            add_event(name=ext_name, version=ext_handler_i.ext_handler.version, op=ext_handler_i.operation,
                      is_success=False, log_event=True, message=err_msg)
        except ExtensionsGoalStateError as error:
            # Catch and report invalid ExtensionConfig errors here to fail fast rather than timing out after 90 min
            err_msg = "Ran into config errors: {0}. \nPlease retry again as another operation with updated settings".format(
                ustr(error))
            self.__handle_and_report_ext_handler_errors(ext_handler_i, error,
                                                        report_op=WALAEventOperation.InvalidExtensionConfig,
                                                        message=err_msg, extension=extension)
        except ExtensionUpdateError as error:
            # Not reporting the error as it has already been reported from the old version
            self.__handle_and_report_ext_handler_errors(ext_handler_i, error, ext_handler_i.operation, ustr(error),
                                                        report=False, extension=extension)
        except ExtensionDownloadError as error:
            msg = "Failed to download artifacts: {0}".format(ustr(error))
            self.__handle_and_report_ext_handler_errors(ext_handler_i, error,
                                                        report_op=WALAEventOperation.Download,
                                                        message=msg, extension=extension)
        except ExtensionError as error:
            self.__handle_and_report_ext_handler_errors(ext_handler_i, error, ext_handler_i.operation, ustr(error),
                                                        extension=extension)
        except Exception as error:
            error.code = -1
            self.__handle_and_report_ext_handler_errors(ext_handler_i, error, ext_handler_i.operation, ustr(error),
                                                        extension=extension)

        return False

    @staticmethod
    def __handle_and_report_ext_handler_errors(ext_handler_i, error, report_op, message, report=True, extension=None):
        # This function is only called for Handler-level errors; we capture MultiConfig errors separately,
        # so report only the HandlerStatus here.
        ext_handler_i.set_handler_status(message=message, code=error.code)

        # If the handler supports multi-config, create a status file with failed status if no status file exists.
        # This is for correctly reporting errors back to CRP for failed Handler-level operations for MultiConfig extensions.
        # In case of Handler failures, we retry each time for each extension, so we need to create a status
        # file with failure, since the extensions won't be called where they could create their status files.
        # This way we guarantee reporting back to CRP.
        if ext_handler_i.should_perform_multi_config_op(extension):
            ext_handler_i.create_status_file(extension, status=ExtensionStatusValue.error, code=error.code,
                                             operation=report_op, message=message, overwrite=False)

        if report:
            name = ext_handler_i.get_extension_full_name(extension)
            handler_version = ext_handler_i.ext_handler.version
            add_event(name=name, version=handler_version, op=report_op, is_success=False, log_event=True,
                      message=message)

    def __handle_ext_disallowed_error(self, ext_handler_i, error_code, report_op, message, extension):
        #
        # Handle and report errors for disallowed extensions (e.g. extensions blocked by policy or disabled via config).
        #
        # TODO: __handle_and_report_ext_handler_errors() is also used to report extension errors, but it does not create
        #       a status file for single-config extensions (see below as to why this is important). This function,
        #       __handle_ext_disallowed_error, implements what we believe is the correct behavior, but at this point we
        #       use it only for disallowed-extension scenarios. In a future release, consider merging the two functions
        #       after assessing any impact.
        #
        # Note: When CRP polls for extension status, it first looks at the handler status and then looks for any extension
        #       status. If extension status is present, CRP uses it instead of the handler status, ensuring that the
        #       sequence number for the extension settings matches the sequence number in the reported status. CRP polls
        #       asynchronously to the Agent and, on a new goal state, it can check the status blob before the Agent has
        #       reported status for that goal state, effectively checking the status of the previous goal state. This is
        #       not an issue when the extension reports status at the extension level, since CRP will wait for the status
        #       for the correct sequence number. However, when the extension reports status *only* at the handler level
        #       (e.g. if the extension has no settings, during install errors, if the extension is disallowed, etc.) CRP
        #       can end up picking up a stale status. There is no good solution for extensions with no settings, and CRP
        #       can report an error from a previous goal state. For install errors of extensions with settings, though,
        #       we work around this issue by reporting the error *both* at the handler level and at the extension level
        #       (although reporting at the handler level *should* be sufficient). By reporting at the extension level,
        #       CRP will enforce a match on the sequence number for the settings, and skip stale status blobs.
        #
        # Keep a list of disallowed extensions so that report_ext_handler_status() can report status for them.
        self.__disallowed_ext_handlers.append(ext_handler_i.ext_handler)

        ext_handler_i.set_handler_status(status=ExtHandlerStatusValue.not_ready, message=message, code=error_code)

        # Only report extension status for install errors of extensions with settings. Disable/uninstall errors are
        # reported at the handler status level only.
        if extension is not None and ext_handler_i.ext_handler.state == ExtensionRequestedState.Enabled:
            # Overwrite any existing status file to reflect the failure accurately.
            ext_handler_i.create_status_file(extension, status=ExtensionStatusValue.error, code=error_code,
                                             operation=ext_handler_i.operation, message=message, overwrite=True)

        name = ext_handler_i.get_extension_full_name(extension)
        handler_version = ext_handler_i.ext_handler.version
        add_event(name=name, version=handler_version, op=report_op, is_success=False, log_event=True, message=message)

    def handle_enable(self, ext_handler_i, extension):
        """
        1. Ensure the handler is installed
        2. Check whether the extension is enabled or disabled and then process it accordingly
        """
        uninstall_exit_code = None
        old_ext_handler_i = ext_handler_i.get_installed_ext_handler()

        current_handler_state = ext_handler_i.get_handler_state()
        ext_handler_i.logger.info("[Enable] current handler state is: {0}", current_handler_state.lower())
        # We go through the entire process of downloading and initializing the extension if it's either a fresh
        # extension or a retry of a previously failed upgrade.
        if current_handler_state == ExtHandlerState.NotInstalled or current_handler_state == ExtHandlerState.FailedUpgrade:
            self.__setup_new_handler(ext_handler_i, extension)

            if old_ext_handler_i is None:
                ext_handler_i.install(extension=extension)
            elif ext_handler_i.version_ne(old_ext_handler_i):
                # This is a special case: we need to update the handler version here, but to do that we also need to
                # disable each enabled extension of this handler.
                uninstall_exit_code = ExtHandlersHandler._update_extension_handler_and_return_if_failed(
                    old_ext_handler_i, ext_handler_i, extension)
        else:
            ext_handler_i.ensure_consistent_data_for_mc()
            ext_handler_i.update_settings(extension)

        self.__handle_extension(ext_handler_i, extension, uninstall_exit_code)

    @staticmethod
    def __setup_new_handler(ext_handler_i, extension):
        ext_handler_i.set_handler_state(ExtHandlerState.NotInstalled)
        ext_handler_i.download()
        ext_handler_i.initialize()
        ext_handler_i.update_settings(extension)

    @staticmethod
    def __handle_extension(ext_handler_i, extension, uninstall_exit_code):
        # Check whether extension-level settings were provided for the handler; if not, call enable for the handler.
        # This is legacy behavior: we can have handlers with no settings.
        if extension is None:
            ext_handler_i.enable()
            return

        # MultiConfig: Handle extension-level ops here
        ext_handler_i.logger.info("Requested extension state: {0}", extension.state)

        if extension.state == ExtensionState.Enabled:
            ext_handler_i.enable(extension, uninstall_exit_code=uninstall_exit_code)
        elif extension.state == ExtensionState.Disabled:
            # Only disable the extension if the requested state == Disabled and the current state != Disabled
            if ext_handler_i.get_extension_state(extension) != ExtensionState.Disabled:
                # Extensions can only be disabled for Multi Config extensions. The disable operation for an extension is
                # tantamount to uninstalling the Handler, so we ignore errors in case of disable failure and delete the state.
                ext_handler_i.disable(extension, ignore_error=True)
            else:
                ext_handler_i.logger.info("Extension already disabled, not doing anything")
        else:
            raise ExtensionsGoalStateError(
                "Unknown requested state for Extension {0}: {1}".format(extension.name, extension.state))

    @staticmethod
    def _update_extension_handler_and_return_if_failed(old_ext_handler_i, ext_handler_i, extension=None):

        def execute_old_handler_command_and_return_if_succeeds(func):
            """
            A common wrapper to execute all commands that need to run on the old handler,
            so that they share a common exception-handling mechanism
            :param func: The command to be executed on the old handler
            :return: The exit code of the command (0 if it succeeds)
            """
            continue_on_update_failure = False
            exit_code = 0
            try:
                continue_on_update_failure = ext_handler_i.load_manifest().is_continue_on_update_failure()
                func()
            except ExtensionError as e:
                # Report the event with the old handler and raise a new ExtensionUpdateError to set the
                # handler status on the new version
                msg = "%s; ContinueOnUpdate: %s" % (ustr(e), continue_on_update_failure)
                old_ext_handler_i.report_event(message=msg, is_success=False)
                if not continue_on_update_failure:
                    # We need to populate the correct error code here.
                    # Without this, the superclass defaults to code -1, which may not accurately represent the error.
                    raise ExtensionUpdateError(msg, code=ExtensionErrorCodes.PluginUpdateProcessingFailed)

                exit_code = e.code
                if isinstance(e, ExtensionOperationError):
                    exit_code = e.exit_code  # pylint: disable=E1101

                logger.info("Continue on Update failure flag is set, proceeding with update")
            return exit_code

        disable_exit_codes = defaultdict(lambda: NOT_RUN)
        # We only want to disable the old handler if it is currently enabled; no other state makes sense.
        if old_ext_handler_i.get_handler_state() == ExtHandlerState.Enabled:

            # Corner case - If the old handler is a Single Config Handler with no extensions at all,
            # we should just disable the handler
            if not old_ext_handler_i.supports_multi_config and not any(old_ext_handler_i.extensions):
                disable_exit_codes[
                    old_ext_handler_i.ext_handler.name] = execute_old_handler_command_and_return_if_succeeds(
                    func=partial(old_ext_handler_i.disable, extension=None))

            # Else disable all enabled extensions of this handler.
            # Note: If MC is supported this will disable only enabled_extensions, else it will disable all extensions
            for old_ext in old_ext_handler_i.enabled_extensions:
                disable_exit_codes[old_ext.name] = execute_old_handler_command_and_return_if_succeeds(
                    func=partial(old_ext_handler_i.disable, extension=old_ext))

        ext_handler_i.copy_status_files(old_ext_handler_i)
        if ext_handler_i.version_gt(old_ext_handler_i):
            ext_handler_i.update(disable_exit_codes=disable_exit_codes,
                                 updating_from_version=old_ext_handler_i.ext_handler.version,
                                 extension=extension)
        else:
            updating_from_version = ext_handler_i.ext_handler.version
            old_ext_handler_i.update(handler_version=updating_from_version,
                                     disable_exit_codes=disable_exit_codes,
                                     updating_from_version=updating_from_version,
                                     extension=extension)
        uninstall_exit_code = execute_old_handler_command_and_return_if_succeeds(
            func=partial(old_ext_handler_i.uninstall, extension=extension))
        old_ext_handler_i.remove_ext_handler()
        ext_handler_i.update_with_install(uninstall_exit_code=uninstall_exit_code, extension=extension)
        return uninstall_exit_code

    def handle_disable(self, ext_handler_i, extension=None):
        """
        Disable is legacy behavior that CRP doesn't support; it exists only for XML-based extensions.
        In case we get a disable request, just disable that extension.
        """
        handler_state = ext_handler_i.get_handler_state()
        ext_handler_i.logger.info("[Disable] current handler state is: {0}", handler_state.lower())
        if handler_state == ExtHandlerState.Enabled:
            ext_handler_i.disable(extension)

    def handle_uninstall(self, ext_handler_i, extension):
        """
        To uninstall the handler, first ensure all extensions are disabled.
        1. Disable all enabled extensions first if the Handler is Enabled, and then disable the handler
           (disabled extensions won't have any extensions dependent on them, so we can just go ahead and remove
           all of them at once if HandlerState == Uninstall. CRP will only set the HandlerState to Uninstall if
           all its extensions are set to be disabled.)
        2. Finally uninstall the handler
        """
        handler_state = ext_handler_i.get_handler_state()
        ext_handler_i.logger.info("[Uninstall] current handler state is: {0}", handler_state.lower())
        if handler_state != ExtHandlerState.NotInstalled:
            if handler_state == ExtHandlerState.Enabled:

                # Corner case - Single Config Handler with no extensions at all.
                # If there are no extension settings for the Handler, we should just disable the handler
                if not ext_handler_i.supports_multi_config and not any(ext_handler_i.extensions):
                    ext_handler_i.disable()

                # If the Handler is Enabled, there should be at least one enabled extension for the handler.
                # Note: If MC is supported this will disable only enabled_extensions, else it will disable all extensions
                for enabled_ext in ext_handler_i.enabled_extensions:
                    ext_handler_i.disable(enabled_ext)

            # Try uninstalling the extension and swallow any exceptions in case of failures, after logging them
            try:
                ext_handler_i.uninstall(extension=extension)
            except ExtensionError as e:
                ext_handler_i.report_event(message=ustr(e), is_success=False)

        ext_handler_i.remove_ext_handler()

    def __get_handlers_on_file_system(self, goal_state_changed):
        handlers_to_report = []
        # Ignore the `history` and `events` directories as they're not handlers and are agent-generated
        for item, path in list_agent_lib_directory(skip_agent_package=True,
                                                   ignore_names=[EVENTS_DIRECTORY, ARCHIVE_DIRECTORY_NAME]):
            try:
                handler_instance = ExtHandlersHandler.get_ext_handler_instance_from_path(name=item,
                                                                                        path=path,
                                                                                        protocol=self.protocol)
                if handler_instance is not None:
                    ext_handler = handler_instance.ext_handler
                    # For each handler, we need to add extensions to report their status.
                    # For Single Config, we just need to add one extension named after the Handler.
                    # For Multi Config, walk the config directory, find all unique extension names,
                    # and add them as extensions to the handler.
                    extensions_names = set()
                    # Settings for Multi Config are saved as <extname>.<seq_no>.settings.
                    # Use this pattern to determine whether the Handler supports Multi Config, and add its extensions
                    for settings_path in glob.iglob(os.path.join(handler_instance.get_conf_dir(), "*.*.settings")):
                        match = re.search("(?P<extname>\\w+)\\.\\d+\\.settings", settings_path)
                        if match is not None:
                            extensions_names.add(match.group("extname"))
                            ext_handler.supports_multi_config = True

                    # If nothing is found with that pattern then it's Single Config; add an extension with the Handler name
                    if not any(extensions_names):
                        extensions_names.add(ext_handler.name)

                    for ext_name in extensions_names:
                        ext = ExtensionSettings(name=ext_name)
                        # Fetch the last modified sequence number
                        seq_no, _ = handler_instance.get_status_file_path(ext)
                        ext.sequenceNumber = seq_no
                        # Append the extension to the list of extensions for the handler
                        ext_handler.settings.append(ext)

                    handlers_to_report.append(ext_handler)
            except Exception as error:
                # Log the error once per goal state
                if goal_state_changed:
                    logger.warn("Can't fetch ExtHandler from path: {0}; Error: {1}".format(path, ustr(error)))

        return handlers_to_report

    def report_ext_handlers_status(self, goal_state_changed=False, vm_agent_update_status=None,
                                   vm_agent_supports_fast_track=False):
        """
        Go through the handler_state dir, collect and report status.
        Returns the status it reported, or None if an error occurred.
        """
        try:
            vm_status = VMStatus(status="Ready", message="Guest Agent is running",
                                 gs_aggregate_status=self.__gs_aggregate_status,
                                 vm_agent_update_status=vm_agent_update_status)
            vm_status.vmAgent.set_supports_fast_track(vm_agent_supports_fast_track)

            handlers_to_report = []

            # In case of an Unsupported error, report the status of the handlers on the VM
            if self.__last_gs_unsupported():
                handlers_to_report = self.__get_handlers_on_file_system(goal_state_changed)

            # If the GoalState is supported, report the status of the extension handlers that were requested by the GoalState
            elif not self.__last_gs_unsupported() and self.ext_handlers is not None:
                handlers_to_report = self.ext_handlers

            for ext_handler in handlers_to_report:
                try:
                    self.report_ext_handler_status(vm_status, ext_handler, goal_state_changed)
                except ExtensionError as error:
                    add_event(op=WALAEventOperation.ExtensionProcessing, is_success=False, message=ustr(error))

            logger.verbose("Report vm agent status")
            try:
                self.protocol.report_vm_status(vm_status)
                logger.verbose("Completed vm agent status report successfully")
                self.report_status_error_state.reset()
            except ProtocolNotFoundError as error:
                self.report_status_error_state.incr()
                message = "Failed to report vm agent status: {0}".format(error)
                logger.verbose(message)
            except ProtocolError as error:
                self.report_status_error_state.incr()
                message = "Failed to report vm agent status: {0}".format(error)
                add_event(AGENT_NAME,
                          version=CURRENT_VERSION,
                          op=WALAEventOperation.ExtensionProcessing,
                          is_success=False,
                          message=message)

            if self.report_status_error_state.is_triggered():
                message = "Failed to report vm agent status for more than {0}" \
                    .format(self.report_status_error_state.min_timedelta)
                add_event(AGENT_NAME,
                          version=CURRENT_VERSION,
                          op=WALAEventOperation.ReportStatusExtended,
                          is_success=False,
                          message=message)
                self.report_status_error_state.reset()

            return vm_status

        except Exception as error:
            msg = u"Failed to report status: {0}".format(textutil.format_exception(error))
            logger.warn(msg)
            add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.ReportStatus, is_success=False,
                      message=msg)
            return None

    def report_ext_handler_status(self, vm_status, ext_handler, goal_state_changed):
        ext_handler_i = ExtHandlerInstance(ext_handler, self.protocol)

        handler_status = ext_handler_i.get_handler_status()

        # If nothing is available, skip reporting
        if handler_status is None:
            # We should always have some handler status if the requested state != Uninstall, irrespective of single or
            # multi-config. If the state != Uninstall, report an error
            if ext_handler.state != ExtensionRequestedState.Uninstall:
                msg = "No handler status found for {0}. Not reporting anything for it.".format(ext_handler.name)
                ext_handler_i.report_error_on_incarnation_change(goal_state_changed, log_msg=msg, event_msg=msg)
            return

        handler_state = ext_handler_i.get_handler_state()
        ext_handler_statuses = []

        ext_disallowed = ext_handler in self.__disallowed_ext_handlers

        # For MultiConfig, we need to report status per extension even for Handler-level failures.
        # If we have a HandlerStatus for a MultiConfig handler and the GS is requesting it, we report status per
        # extension even if HandlerState == NotInstalled (sample scenarios: ExtensionsGoalStateError, DecideVersionError, etc.)
        # We also need to report extension status for an uninstalled handler if the extension is disallowed (due to a
        # policy failure, extensions disabled, etc.) because CRP waits for extension runtime status before failing the operation.
        if handler_state != ExtHandlerState.NotInstalled or ext_handler.supports_multi_config or ext_disallowed:

            # Since reading the heartbeat requires reading the Manifest, this would fail if the HandlerManifest is not found.
            # Only try to read the heartbeat if HandlerState != NotInstalled.
            # If the extension is disallowed, concatenate the heartbeat message to the existing handler status message, and
            # do not override the handler error code or status with the heartbeat.
            if handler_state != ExtHandlerState.NotInstalled:  # Heartbeat is a handler level thing only, so we don't need to modify this
                try:
                    heartbeat = ext_handler_i.collect_heartbeat()
                    if heartbeat is not None:
                        if ext_disallowed:
                            pass  # The status already specifies that the extension is disallowed ('NotReady')
                        else:
                            handler_status.status = heartbeat.get('status')
                        if 'formattedMessage' in heartbeat:
                            heartbeat_message = parse_formatted_message(heartbeat.get('formattedMessage'))
                            if ext_disallowed:
                                # If extension is disallowed, the agent should set the handler status message on behalf of the
                                # extension, handler_status.message should not be None.
                                if handler_status.message is None:
                                    handler_status.message = "Extension was not executed, but it was previously enabled and reported the following heartbeat:\n{0}".format(heartbeat_message)
                                else:
                                    handler_status.message += " Extension was previously enabled and reported the following heartbeat:\n{0}".format(heartbeat_message)
                            else:
                                handler_status.message = heartbeat_message
                except ExtensionError as e:
                    ext_handler_i.set_handler_status(message=ustr(e), code=e.code)

            ext_handler_statuses = ext_handler_i.get_extension_handler_statuses(handler_status, goal_state_changed)

        # If not any extension status reported, report the Handler status
        if not any(ext_handler_statuses):
            ext_handler_statuses.append(handler_status)

        vm_status.vmAgent.extensionHandlers.extend(ext_handler_statuses)


class ExtHandlerInstance(object):
    def __init__(self, ext_handler, protocol, execution_log_max_size=(10 * 1024 * 1024), extension=None):
        self.ext_handler = ext_handler
        self.protocol = protocol
        self.operation = None
        self.pkg = None
        self.pkg_file = None
        self.logger = None
        self.set_logger(extension=extension, execution_log_max_size=execution_log_max_size)

    @property
    def supports_multi_config(self):
        return self.ext_handler.supports_multi_config

    @property
    def extensions(self):
        return self.ext_handler.settings

    @property
    def enabled_extensions(self):
        """
        In case of Single config, just return all the extensions of the handler (expectation being that there'll
        only be a single extension per handler). We will not be maintaining extension level state for Single
        config Handlers.
        """
        if self.supports_multi_config:
            return [ext for ext in self.extensions if self.get_extension_state(ext) == ExtensionState.Enabled]
        return self.extensions

    def get_extension_full_name(self, extension=None):
        """
        Get the full name of the extension (<HandlerName>.<ExtensionName>).
        :param extension: The requested extension
        :return: <HandlerName> if MultiConfig not supported or extension == None, else <HandlerName>.<ExtensionName>
        """
        if self.should_perform_multi_config_op(extension):
            return "{0}.{1}".format(self.ext_handler.name, extension.name)

        return self.ext_handler.name

    def __set_command_execution_log(self, extension, execution_log_max_size):
        try:
            fileutil.mkdir(self.get_log_dir(), mode=0o755, reset_mode_and_owner=False)
        except IOError as e:
            self.logger.error(u"Failed to create extension log dir: {0}", e)
        else:
            log_file_name = "CommandExecution.log" if not self.should_perform_multi_config_op(
                extension) else "CommandExecution_{0}.log".format(extension.name)
            log_file = os.path.join(self.get_log_dir(), log_file_name)
            self.__truncate_file_head(log_file, execution_log_max_size, self.get_extension_full_name(extension))
            self.logger.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, log_file)

    @staticmethod
    def __truncate_file_head(filename, max_size, extension_name):
        try:
            if os.stat(filename).st_size <= max_size:
                return

            with open(filename, "rb") as existing_file:
                existing_file.seek(-1 * max_size, 2)
                _ = existing_file.readline()

                with open(filename + ".tmp", "wb") as tmp_file:
                    shutil.copyfileobj(existing_file, tmp_file)

            os.rename(filename + ".tmp", filename)

        except (IOError, OSError) as e:
            if is_file_not_found_error(e):
                # If CommandExecution.log does not exist, it's not noteworthy;
                # this just means that no extension with self.ext_handler.name is
                # installed.
                return

            logger.error(
                "Exception occurred while attempting to truncate {0} for extension {1}. Exception is: {2}",
                filename, extension_name, ustr(e))

            for f in (filename, filename + ".tmp"):
                try:
                    os.remove(f)
                except (IOError, OSError) as cleanup_exception:
                    if is_file_not_found_error(cleanup_exception):
                        logger.info("File '{0}' does not exist.", f)
                    else:
                        logger.warn("Exception occurred while attempting to remove file '{0}': {1}", f,
                                    cleanup_exception)

    def decide_version(self, target_state, extension, gs_activity_id):
        self.logger.verbose("Decide which version to use")
        try:
            manifest = self.protocol.get_goal_state().fetch_extension_manifest(self.ext_handler.name, self.ext_handler.manifest_uris)
            pkg_list = manifest.pkg_list
        except ProtocolError as e:
            raise ExtensionError("Failed to get ext handler pkgs", e)
        except ExtensionDownloadError:
            self.set_operation(WALAEventOperation.Download)
            raise

        # Determine the desired and installed versions
        requested_version = FlexibleVersion(str(self.ext_handler.version))
        installed_version_string = self.get_installed_version()
        installed_version = requested_version \
            if installed_version_string is None \
            else FlexibleVersion(installed_version_string)

        # Divide packages
        # - Find the installed package (its version must exactly match)
        # - Find the internal candidate (its version must exactly match)
        # - Separate the public packages
        selected_pkg = None
        installed_pkg = None
        pkg_list.versions.sort(key=lambda p: FlexibleVersion(p.version))
        for pkg in pkg_list.versions:
            pkg_version = FlexibleVersion(pkg.version)
            if pkg_version == installed_version:
                installed_pkg = pkg
            if requested_version.matches(pkg_version):
                selected_pkg = pkg

        # Finally, update the version only if not downgrading
        # Note:
        #  - A downgrade, which will be bound to the same major version,
        #    is allowed if the installed version is no longer available
        if target_state in (ExtensionRequestedState.Uninstall, ExtensionRequestedState.Disabled):
            if installed_pkg is None:
                msg = "Failed to find installed version: {0} of Handler: {1} in handler manifest to uninstall.".format(
                    installed_version, self.ext_handler.name)
                self.logger.warn(msg)
            self.pkg = installed_pkg
            self.ext_handler.version = str(installed_version) \
                if installed_version is not None else None
        else:
            self.pkg = selected_pkg
            if self.pkg is not None:
                if self.ext_handler.version != str(selected_pkg.version):
                    # The Agent should not change the version requested by the Goal State. Send telemetry if this happens.
                    event.info(
                        WALAEventOperation.RequestedVersionMismatch,
                        'Goal State requesting {0} version {1}, but Agent overriding with version {2} [Activity ID: {3}]',
                        self.ext_handler.name, self.ext_handler.version, selected_pkg.version, gs_activity_id)
                self.ext_handler.version = str(selected_pkg.version)

        if self.pkg is not None:
            self.logger.verbose("Use version: {0}", self.pkg.version)

        # We reset the logger here in case the handler version changes
        if not requested_version.matches(FlexibleVersion(self.ext_handler.version)):
            self.set_logger(extension=extension)

        return self.pkg

    def set_logger(self, execution_log_max_size=(10 * 1024 * 1024), extension=None):
        prefix = "[{0}]".format(self.get_full_name(extension))
        self.logger = logger.Logger(logger.DEFAULT_LOGGER, prefix)
        self.__set_command_execution_log(extension, execution_log_max_size)

    def version_gt(self, other):
        self_version = self.ext_handler.version
        other_version = other.ext_handler.version
        return FlexibleVersion(self_version) > FlexibleVersion(other_version)

    def version_ne(self, other):
        self_version = self.ext_handler.version
        other_version = other.ext_handler.version
        return FlexibleVersion(self_version) != FlexibleVersion(other_version)

    def get_installed_ext_handler(self):
        latest_version = self.get_installed_version()
        if latest_version is None:
            return None

        installed_handler = copy.deepcopy(self.ext_handler)
        installed_handler.version = latest_version
        return ExtHandlerInstance(installed_handler, self.protocol)

    def get_installed_version(self):
        latest_version = None

        for path in glob.iglob(os.path.join(conf.get_lib_dir(), self.ext_handler.name + "-*")):
            if not os.path.isdir(path):
                continue

            separator = path.rfind('-')
            version_from_path = FlexibleVersion(path[separator + 1:])
            state_path = os.path.join(path, 'config', 'HandlerState')

            if not os.path.exists(state_path) or fileutil.read_file(state_path) == ExtHandlerState.NotInstalled \
                    or fileutil.read_file(state_path) == ExtHandlerState.FailedUpgrade:
                logger.verbose("Ignoring version of uninstalled or failed extension: {0}".format(path))
                continue

            if latest_version is None or latest_version < version_from_path:
                latest_version = version_from_path

        return str(latest_version) if latest_version is not None else None

    def copy_status_files(self, old_ext_handler_i):
        self.logger.info("Copy status files from old plugin to new")

        old_ext_dir = old_ext_handler_i.get_base_dir()
        new_ext_dir = self.get_base_dir()

        old_ext_mrseq_file = os.path.join(old_ext_dir, "mrseq")
        if os.path.isfile(old_ext_mrseq_file):
            logger.info("Migrating {0} to {1}.", old_ext_mrseq_file, new_ext_dir)
            shutil.copy2(old_ext_mrseq_file, new_ext_dir)
        else:
            logger.info("{0} does not exist, no migration is needed.", old_ext_mrseq_file)

        old_ext_status_dir = old_ext_handler_i.get_status_dir()
        new_ext_status_dir = self.get_status_dir()

        if os.path.isdir(old_ext_status_dir):
            for status_file in os.listdir(old_ext_status_dir):
                status_file = os.path.join(old_ext_status_dir, status_file)
                if os.path.isfile(status_file):
                    shutil.copy2(status_file, new_ext_status_dir)

    def set_operation(self, op):
        self.operation = op

    def report_event(self, name=None, message="", is_success=True, duration=0, log_event=True):
        ext_handler_version = self.ext_handler.version
        name = self.ext_handler.name if name is None else name
        add_event(name=name, version=ext_handler_version, message=message,
                  op=self.operation, is_success=is_success, duration=duration, log_event=log_event)

    def _unzip_extension_package(self, source_file, target_directory):
        self.logger.info("Unzipping extension package: {0}", source_file)
        try:
            zipfile.ZipFile(source_file).extractall(target_directory)
        except Exception as exception:
            logger.info("Error while unzipping extension package: {0}", ustr(exception))
            os.remove(source_file)
            if os.path.exists(target_directory):
                shutil.rmtree(target_directory)
            return False
        return True

    def download(self):
        """
        If extension is signed, validate extension package signature immediately after download, and validate handler
        manifest 'signingInfo' after package extraction. If both signature and handler manifest are successfully
        validated, save state file indicating this. If validation fails, the error is captured and reported via
        telemetry, but download and extraction are not blocked. In future releases, once sufficient telemetry has
        been collected to gain confidence in the validation process, package extraction will be blocked if signature
        validation fails.
        TODO: Allow users to opt into enforcement via policy as a temporary workaround until validation is enforced by default.
        """
        begin_utc = datetime.datetime.now(UTC)
        self.set_operation(WALAEventOperation.Download)

        if self.pkg is None or self.pkg.uris is None or len(self.pkg.uris) == 0:
            raise ExtensionDownloadError("No package uri found")

        package_file = os.path.join(conf.get_lib_dir(), self.get_extension_package_zipfile_name())
        should_validate_ext_signature = signature_validation_enabled() and self.ext_handler.encoded_signature != ""
        signature_validated = False

        # Handle case where extension zip package already exists, but has not been extracted. If signature is present,
        # validate the package signature, extract the package, and then validate handler manifest.
        package_exists = False
        if os.path.exists(package_file):
            msg = "Using existing extension package: {0}".format(package_file)
            self.logger.info(msg)
            add_event(op=WALAEventOperation.Download, message=msg, name=self.ext_handler.name,
                      version=self.ext_handler.version, is_success=True, log_event=False)

            # Validate package signature
            if should_validate_ext_signature:
                try:
                    # TODO: set 'failure_log_level' to ERROR when signature validation is enforced.
                    validate_signature(package_file, self.ext_handler.encoded_signature,
                                       package_full_name=self.get_full_name(),
                                       failure_log_level=logger.LogLevel.WARNING)
                    signature_validated = True
                except SignatureValidationError:
                    # validate_signature() only raises SignatureValidationError and sends logs/telemetry for the error, so do nothing here.
                    # TODO: Raise error once signature validation is enforced.
                    pass

            if self._unzip_extension_package(package_file, self.get_base_dir()):
                package_exists = True
            else:
                msg = "The existing extension package is invalid, will ignore it."
                self.logger.info(msg)
                add_event(op=WALAEventOperation.Download, message=msg, name=self.ext_handler.name,
                          version=self.ext_handler.version, is_success=True, log_event=False)
                signature_validated = False

        # Handle the case where the extension package does not exist. Download the zip package, validate the signature
        # if present, and extract the package. If package is signed, validate handler manifest.
        if not package_exists:
            is_fast_track_goal_state = self.protocol.get_goal_state().extensions_goal_state.source == GoalStateSource.FastTrack
            try:
                if signature_validation_enabled() and self.ext_handler.encoded_signature == "":
                    # Extension signature status is already reported in telemetry during goal state processing, so here,
                    # we log locally only for debugging purposes if extension is unsigned.
                    self.logger.info("No signature for extension '{0}' in goal state, skipping signature validation.".format(self.get_full_name()))

                # If signature should not be validated, pass an empty string as 'signature' to download_zip_package(),
                # which will skip validation when the signature parameter is empty.
                signature = self.ext_handler.encoded_signature if should_validate_ext_signature else ""
                # TODO: Once signature enforcement is implemented, update this function to accept an 'enforce_signature' flag and pass it through to download_zip_package().
                self.protocol.client.download_zip_package(package_name=self.get_full_name(), uris=self.pkg.uris,
                                                          target_file=package_file,
                                                          target_directory=self.get_base_dir(),
                                                          use_verify_header=is_fast_track_goal_state,
                                                          signature=signature, enforce_signature=False)
                if should_validate_ext_signature:
                    # download_zip_package() performs signature validation internally. If no exception is raised, the signature was successfully validated.
                    # Mark this here so that we can save validation state later, if handler manifest validation also succeeds.
                    signature_validated = True
            except SignatureValidationError:
                # download_zip_package() will propagate a SignatureValidationError if validation fails. This is the only
                # exception expected from validation, and the error has already been reported, so we do nothing here.
                # Do not block extension execution, continue to manifest validation.
                # TODO: Raise error once signature validation is enforced.
                pass

            self.report_event(message="Download succeeded", duration=elapsed_milliseconds(begin_utc))

        # Validate 'signingInfo' - the publisher, type, and version specified in handler manifest 'signingInfo' should match the extension
        if should_validate_ext_signature:
            try:
                # TODO: set 'failure_log_level' to ERROR when signature validation is enforced.
                validate_handler_manifest_signing_info(self.load_manifest(), self.ext_handler,
                                                       failure_log_level=logger.LogLevel.WARNING)
                # If both manifest and signature were validated successfully, save state.
                if signature_validated:
                    save_signature_validation_state(self.get_base_dir())
            except PackageValidationError:
                # validate_handler_manifest_signing_info() raises only ManifestValidationError, save_signature_validation_state()
                # raises only PackageValidationError. Both send logs/telemetry for any error, so do nothing here.
                # TODO: Raise error once signature validation is enforced.
                pass

        self.pkg_file = package_file

    def ensure_consistent_data_for_mc(self):
        # If CRP expects Handler to support MC, ensure the HandlerManifest also reflects that.
        # Even though the HandlerManifest.json is not expected to change once the extension is installed,
        # CRP can wrongfully send a Multi-Config GoalState even if the Handler supports only Single Config.
        # Checking this only if HandlerState == Enable. In case of Uninstall, we don't care.
        if self.supports_multi_config and not self.load_manifest().supports_multiple_extensions():
            raise ExtensionsGoalStateError(
                "Handler {0} does not support MultiConfig but CRP expects it, failing due to inconsistent data".format(
                    self.ext_handler.name))

    def initialize(self):
        self.logger.info("Initializing extension {0}".format(self.get_full_name()))

        # Add user execute permission to all files under the base dir
        for file in fileutil.get_all_files(self.get_base_dir()):  # pylint: disable=redefined-builtin
            fileutil.chmod(file, os.stat(file).st_mode | stat.S_IXUSR)

        # Save HandlerManifest.json
        man_file = fileutil.search_file(self.get_base_dir(), 'HandlerManifest.json')

        if man_file is None:
            raise ExtensionDownloadError("HandlerManifest.json not found")

        try:
            man = fileutil.read_file(man_file, remove_bom=True)
            fileutil.write_file(self.get_manifest_file(), man)
        except IOError as e:
            fileutil.clean_ioerror(e, paths=[self.get_base_dir(), self.pkg_file])
            raise ExtensionDownloadError(u"Failed to save HandlerManifest.json", e)

        man = self.load_manifest()
        man.report_invalid_boolean_properties(ext_name=self.get_full_name())

        self.ensure_consistent_data_for_mc()

        # Create status and config dir
        try:
            status_dir = self.get_status_dir()
            fileutil.mkdir(status_dir, mode=0o700)

            conf_dir = self.get_conf_dir()
            fileutil.mkdir(conf_dir, mode=0o700)

            if get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported:
                fileutil.mkdir(self.get_extension_events_dir(), mode=0o700)

        except IOError as e:
            fileutil.clean_ioerror(e, paths=[self.get_base_dir(), self.pkg_file])
            raise ExtensionDownloadError(u"Failed to initialize extension '{0}'".format(self.get_full_name()), e)

        # Save HandlerEnvironment.json
        self.create_handler_env()

        self.log_telemetry_if_ext_uses_resource_limits()
        self.set_extension_resource_limits()

    def set_extension_resource_limits(self):
        extension_name = self.get_full_name()
        # setup the resource limits for extension operations and its services.
        man = self.load_manifest()
        resource_limits = man.get_resource_limits()
        CGroupConfigurator.get_instance().setup_extension_slice(
            extension_name=extension_name, cpu_quota=resource_limits.get_extension_slice_cpu_quota())
        CGroupConfigurator.get_instance().set_extension_services_cpu_memory_quota(resource_limits.get_service_list())

    def log_telemetry_if_ext_uses_resource_limits(self):
        extension_name = self.get_full_name()
        man = self.load_manifest()
        resource_limits = man.get_resource_limits()

        def _log(msg):
            event.info(WALAEventOperation.ExtensionResourceGovernance, msg)

        if resource_limits is None:
            return
        if resource_limits.get_extension_slice_cpu_quota() is not None or resource_limits.get_extension_slice_memory_quota() is not None:
            msg = "{0} is using resource governance to set limits".format(extension_name)
            _log(msg)
        if resource_limits.get_service_list() is not None and len(resource_limits.get_service_list()) > 0:
            msg = "{0} is using resource governance for its service list".format(extension_name)
            _log(msg)

    def create_status_file(self, extension, status, code, operation, message, overwrite):
        # Create status file for specified extension. If overwrite is true, overwrite any existing status file. If
        # false, create a status file only if it does not already exist.
        _, status_path = self.get_status_file_path(extension)
        if status_path is not None and (overwrite or not os.path.exists(status_path)):
            now = datetime.datetime.now(UTC).strftime("%Y-%m-%dT%H:%M:%SZ")
            status_contents = [
                {
                    "version": 1.0,
                    "timestampUTC": now,
                    "status": {
                        "name": self.get_extension_full_name(extension),
                        "operation": operation,
                        "status": status,
                        "code": code,
                        "formattedMessage": {
                            "lang": "en-US",
                            "message": redact_sas_token(message)
                        }
                    }
                }
            ]

            # Create status directory if it does not exist. This is needed in the case where the Handler fails before
            # even initializing the directories (ExtensionsGoalStateError, Version deleted from PIR error, etc)
            if not os.path.exists(os.path.dirname(status_path)):
                fileutil.mkdir(os.path.dirname(status_path), mode=0o700)

            self.logger.info("Creating a placeholder status file {0} with status: {1}".format(status_path, status))
            fileutil.write_file(status_path, json.dumps(status_contents))

    def enable(self, extension=None, uninstall_exit_code=None):
        try:
            self._enable_extension(extension, uninstall_exit_code)
        except ExtensionError as error:
            if self.should_perform_multi_config_op(extension):
                raise MultiConfigExtensionEnableError(error)
            raise
        # Even if a single extension is enabled for this handler, set the Handler state as Enabled
        self.set_handler_state(ExtHandlerState.Enabled)
        self.set_handler_status(status=ExtHandlerStatusValue.ready, message="Plugin enabled")

    def should_perform_multi_config_op(self, extension):
        return self.supports_multi_config and extension is not None

    def _enable_extension(self, extension, uninstall_exit_code):
        uninstall_exit_code = str(uninstall_exit_code) if uninstall_exit_code is not None else NOT_RUN

        env = {
            ExtCommandEnvVariable.UninstallReturnCode: uninstall_exit_code
        }

        # Call the setup again here, in case the extension was already installed and setup was not called before
        self.set_extension_resource_limits()

        self.set_operation(WALAEventOperation.Enable)
        man = self.load_manifest()
        enable_cmd = man.get_enable_command()
        self.logger.info("Enable extension: [{0}]".format(enable_cmd))
        self.launch_command(enable_cmd, cmd_name="enable", timeout=300,
                            extension_error_code=ExtensionErrorCodes.PluginEnableProcessingFailed, env=env,
                            extension=extension)

        if self.should_perform_multi_config_op(extension):
            # Only save extension state if MC supported
            self.__set_extension_state(extension, ExtensionState.Enabled)

        # start tracking the extension services cgroup.
        resource_limits = man.get_resource_limits()
        CGroupConfigurator.get_instance().start_tracking_extension_services_cgroups(
            resource_limits.get_service_list())

    def _disable_extension(self, extension=None):
        self.set_operation(WALAEventOperation.Disable)
        man = self.load_manifest()
        disable_cmd = man.get_disable_command()
        self.logger.info("Disable extension: [{0}]".format(disable_cmd))
        self.launch_command(disable_cmd, cmd_name="disable", timeout=900,
                            extension_error_code=ExtensionErrorCodes.PluginDisableProcessingFailed,
                            extension=extension)

    def disable(self, extension=None, ignore_error=False):
        try:
            self._disable_extension(extension)
        except ExtensionError as error:
            if not ignore_error:
                raise

            msg = "[Ignored Error] Ran into error disabling extension:{0}".format(ustr(error))
            self.logger.info(msg)
            self.report_event(name=self.get_extension_full_name(extension), message=msg, is_success=False,
                              log_event=False)

        #
        # In the case of multi-config handlers, we keep the state of each extension individually.
        # Disable can be called when the extension is deleted (the extension state in the goal state is set to "disabled"),
        # or as part of the Uninstall and Update sequences. When the extension is deleted, we need to remove its state, along
        # with its status and settings files. Otherwise, we need to set the state to "disabled".
        #
        if self.should_perform_multi_config_op(extension):
            if extension.state == ExtensionRequestedState.Disabled:
                self.__remove_extension_state_files(extension)
            else:
                self.__set_extension_state(extension, ExtensionState.Disabled)

        # For Single config, don't check enabled_extensions because no extension state is maintained.
        # For MultiConfig, set the handler state to Installed only when all extensions have been disabled
        if not self.supports_multi_config or not any(self.enabled_extensions):
            self.set_handler_state(ExtHandlerState.Installed)
            self.set_handler_status(status=ExtHandlerStatusValue.not_ready, message="Plugin disabled")

    def install(self, uninstall_exit_code=None, extension=None):
        # For Handler level operations, extension just specifies the settings that initiated the install.
        # This is needed to provide the sequence number and extension name in case the extension needs to report
        # failure/status using status file.
        uninstall_exit_code = str(uninstall_exit_code) if uninstall_exit_code is not None else NOT_RUN
        env = {ExtCommandEnvVariable.UninstallReturnCode: uninstall_exit_code}

        man = self.load_manifest()
        install_cmd = man.get_install_command()
        self.logger.info("Install extension [{0}]".format(install_cmd))
        self.set_operation(WALAEventOperation.Install)
        self.launch_command(install_cmd, cmd_name="install", timeout=900, extension=extension,
                            extension_error_code=ExtensionErrorCodes.PluginInstallProcessingFailed, env=env)
        self.set_handler_state(ExtHandlerState.Installed)
        self.set_handler_status(status=ExtHandlerStatusValue.not_ready, message="Plugin installed but not enabled")

    def uninstall(self, extension=None):
        # For Handler level operations, extension just specifies the settings that initiated the uninstall.
        # This is needed to provide the sequence number and extension name in case the extension needs to report
        # failure/status using status file.
        self.set_operation(WALAEventOperation.UnInstall)
        man = self.load_manifest()

        # stop tracking extension services cgroup.
        resource_limits = man.get_resource_limits()
        CGroupConfigurator.get_instance().stop_tracking_extension_services_cgroups(
            resource_limits.get_service_list())
        CGroupConfigurator.get_instance().reset_extension_services_quota(
            resource_limits.get_service_list())

        uninstall_cmd = man.get_uninstall_command()
        self.logger.info("Uninstall extension [{0}]".format(uninstall_cmd))
        self.launch_command(uninstall_cmd, cmd_name="uninstall", extension=extension)

    def remove_ext_handler(self):
        try:
            zip_filename = os.path.join(conf.get_lib_dir(), self.get_extension_package_zipfile_name())
            if os.path.exists(zip_filename):
                os.remove(zip_filename)
                self.logger.verbose("Deleted the extension zip at path {0}", zip_filename)

            base_dir = self.get_base_dir()
            if os.path.isdir(base_dir):
                self.logger.info("Remove extension handler directory: {0}", base_dir)

                # some extensions uninstall asynchronously so ignore error 2 while removing them
                def on_rmtree_exception(_, __, exception):
                    if not isinstance(exception, OSError) or exception.errno != 2:  # [Errno 2] No such file or directory
                        raise exception

                # On 3.12, 'onerror' has been deprecated in favor of 'onexc'
                if sys.version_info[0] == 3 and sys.version_info[1] >= 12 or sys.version_info[0] > 3:
                    kwargs = {'onexc': on_rmtree_exception}
                else:
                    kwargs = {'onerror': lambda function, path, exc_info: on_rmtree_exception(function, path, exc_info[1])}

                # E1123: Unexpected keyword argument 'onexc' in function call (unexpected-keyword-arg)
                shutil.rmtree(base_dir, **kwargs)  # pylint: disable=unexpected-keyword-arg

            CGroupConfigurator.get_instance().stop_tracking_extension_cgroups(self.get_full_name())
            self.logger.info("Remove the extension slice: {0}".format(self.get_full_name()))
            CGroupConfigurator.get_instance().reset_extension_quota(extension_name=self.get_full_name())
        except IOError as e:
            message = "Failed to remove extension handler directory: {0}".format(e)
            self.report_event(message=message, is_success=False)
            self.logger.warn(message)

    def update(self, handler_version=None, disable_exit_codes=None, updating_from_version=None, extension=None):
        # For Handler level operations, extension just specifies the settings that initiated the update.
        # This is needed to provide the sequence number and extension name in case the extension needs to report
        # failure/status using status file.
        if handler_version is None:
            handler_version = self.ext_handler.version

        env = {
            'VERSION': handler_version,
            ExtCommandEnvVariable.UpdatingFromVersion: updating_from_version
        }

        if not self.supports_multi_config:
            # For single config, extension.name == ext_handler.name
            env[ExtCommandEnvVariable.DisableReturnCode] = ustr(disable_exit_codes.get(self.ext_handler.name))
        else:
            disable_codes = []
            for ext in self.extensions:
                disable_codes.append({
                    "extensionName": ext.name,
                    "exitCode": ustr(disable_exit_codes.get(ext.name))
                })
            env[ExtCommandEnvVariable.DisableReturnCodeMultipleExtensions] = json.dumps(disable_codes)

        try:
            self.set_operation(WALAEventOperation.Update)
            man = self.load_manifest()
            update_cmd = man.get_update_command()
            self.logger.info("Update extension [{0}]".format(update_cmd))
            self.launch_command(update_cmd, cmd_name="update", timeout=900,
                                extension_error_code=ExtensionErrorCodes.PluginUpdateProcessingFailed,
                                env=env, extension=extension)
        except ExtensionError:
            # Mark the handler as Failed so we don't clean it up and can keep reporting its status
            self.set_handler_state(ExtHandlerState.FailedUpgrade)
            raise

    def update_with_install(self, uninstall_exit_code=None, extension=None):
        man = self.load_manifest()
        if man.is_update_with_install():
            self.install(uninstall_exit_code=uninstall_exit_code, extension=extension)
        else:
            self.logger.info("UpdateWithInstall not set. "
                             "Skip install during upgrade.")
        self.set_handler_state(ExtHandlerState.Installed)

    def _get_last_modified_seq_no_from_config_files(self, extension):
        """
        The sequence number is not guaranteed to always be strictly increasing. To ensure we always get the latest
        one, fetch the sequence number from the config file that was last modified (and not necessarily the largest).
        :return: Last modified Sequence number or -1 on errors
        """
        seq_no = -1

        if self.supports_multi_config and (extension is None or extension.name is None):
            # If no extension name is provided for Multi Config, don't try to parse any sequence number from filesystem
            return seq_no

        try:
            largest_modified_time = 0
            conf_dir = self.get_conf_dir()
            for item in os.listdir(conf_dir):
                item_path = os.path.join(conf_dir, item)
                if not os.path.isfile(item_path):
                    continue
                try:
                    # Settings file for Multi Config look like - <extName>.<seqNo>.settings
                    # Settings file for Single Config look like - <seqNo>.settings
                    match = re.search("((?P<ext_name>\\w+)\\.)*(?P<seq_no>\\d+)\\.settings", item_path)
                    if match is not None:
                        ext_name = match.group('ext_name')
                        if self.supports_multi_config and extension.name != ext_name:
                            continue
                        curr_seq_no = int(match.group("seq_no"))
                        curr_modified_time = os.path.getmtime(item_path)
                        if curr_modified_time > largest_modified_time:
                            seq_no = curr_seq_no
                            largest_modified_time = curr_modified_time
                except (ValueError, IndexError, TypeError):
                    self.logger.verbose("Failed to parse file name: {0}", item)
                    continue
        except Exception as error:
            logger.verbose("Error fetching sequence number from config files: {0}".format(ustr(error)))
            seq_no = -1

        return seq_no

    def get_status_file_path(self, extension=None):
        """
        We should technically only fetch the sequence number from GoalState and not rely on the filesystem at all,
        but there are certain scenarios where we need to fetch the latest sequence number from the filesystem
        (for example, when we need to report the status for extensions of the previous GS if the current GS is
        Unsupported). Always prioritize the sequence number from the extension settings, falling back to the filesystem.
        :param extension: Extension for which the sequence number is required
        :return: Sequence number for the extension, Status file path or -1, None
        """
        path = None
        seq_no = None
        if extension is not None and extension.sequenceNumber is not None:
            try:
                seq_no = int(extension.sequenceNumber)
            except ValueError:
                logger.error('Sequence number [{0}] does not appear to be valid'.format(extension.sequenceNumber))

        if seq_no is None:
            # If we're unable to fetch Sequence number from Extension for any reason,
            # try fetching it from the last modified Settings file.
            seq_no = self._get_last_modified_seq_no_from_config_files(extension)

        if seq_no is not None and seq_no > -1:
            if self.should_perform_multi_config_op(extension) and extension is not None and extension.name is not None:
                path = os.path.join(self.get_status_dir(), "{0}.{1}.status".format(extension.name, seq_no))
            elif not self.supports_multi_config:
                path = os.path.join(self.get_status_dir(), "{0}.status").format(seq_no)

        return seq_no if seq_no is not None else -1, path

    def collect_ext_status(self, ext):
        self.logger.verbose("Collect extension status for {0}".format(self.get_extension_full_name(ext)))

        seq_no, ext_status_file = self.get_status_file_path(ext)

        # We should never try to read any status file if the handler has no settings, returning None in that case
        if seq_no == -1 or ext is None:
            return None

        data = None
        data_str = None
        # Extension.name contains the extension name in case of MC and Handler name in case of Single Config.
        ext_status = ExtensionStatus(name=ext.name, seq_no=seq_no)

        try:
            data_str, data = self._read_status_file(ext_status_file)
        except ExtensionStatusError as e:
            msg = ""
            ext_status.status = ExtensionStatusValue.error

            if e.code == ExtensionStatusError.CouldNotReadStatusFile:
                ext_status.code = ExtensionErrorCodes.PluginUnknownFailure
                msg = u"We couldn't read any status for {0} extension, for the sequence number {1}. It failed due" \
                      u" to {2}".format(self.get_full_name(ext), seq_no, ustr(e))
            elif e.code == ExtensionStatusError.InvalidJsonFile:
                ext_status.code = ExtensionErrorCodes.PluginSettingsStatusInvalid
                msg = u"The status reported by the extension {0}(Sequence number {1}), was in an " \
                      u"incorrect format and the agent could not parse it correctly. Failed due to {2}" \
                    .format(self.get_full_name(ext), seq_no, ustr(e))
            elif e.code == ExtensionStatusError.FileNotExists:
                msg = "This status is being reported by the Guest Agent since no status file was " \
                      "reported by extension {0}: {1}".format(self.get_extension_full_name(ext), ustr(e))

                # Reporting a success code and transitioning status to keep in accordance with existing code that
                # creates default status placeholder file
                ext_status.code = ExtensionErrorCodes.PluginSuccess
                ext_status.status = ExtensionStatusValue.transitioning

            # This log is periodic due to the verbose nature of the status check. Please make sure that the message
            # constructed above does not change very frequently and includes important info such as sequence number,
            # extension name to make sure that the log reflects changes in the extension sequence for which the
            # status is being sent.
            logger.periodic_warn(logger.EVERY_HALF_HOUR, u"[PERIODIC] " + msg)
            add_periodic(delta=logger.EVERY_HALF_HOUR, name=self.get_extension_full_name(ext),
                         version=self.ext_handler.version, op=WALAEventOperation.StatusProcessing, is_success=False,
                         message=msg, log_event=False)

            ext_status.message = msg

            return ext_status

        # We did not encounter InvalidJsonFile/CouldNotReadStatusFile and thus the status file was correctly written
        # and has valid json.
        try:
            parse_ext_status(ext_status, data)
            if len(data_str) > _MAX_STATUS_FILE_SIZE_IN_BYTES:
                raise ExtensionStatusError(msg="For Extension Handler {0} for the sequence number {1}, the status "
                                               "file {2} of size {3} bytes is too big. Max Limit allowed is {4} bytes"
                                           .format(self.get_full_name(ext), seq_no, ext_status_file,
                                                   len(data_str), _MAX_STATUS_FILE_SIZE_IN_BYTES),
                                           code=ExtensionStatusError.MaxSizeExceeded)
        except ExtensionStatusError as e:
            msg = u"For Extension Handler {0} for the sequence number {1}, the status file {2}. " \
                  u"Encountered the following error: {3}".format(self.get_full_name(ext), seq_no, ext_status_file,
                                                                 ustr(e))
            logger.periodic_warn(logger.EVERY_DAY, u"[PERIODIC] " + msg)
            add_periodic(delta=logger.EVERY_HALF_HOUR, name=self.get_extension_full_name(ext),
                         version=self.ext_handler.version, op=WALAEventOperation.StatusProcessing, is_success=False,
                         message=msg, log_event=False)

            if e.code == ExtensionStatusError.MaxSizeExceeded:
                ext_status.message, field_size = self._truncate_message(ext_status.message, _MAX_STATUS_MESSAGE_LENGTH)
                ext_status.substatusList = self._process_substatus_list(ext_status.substatusList, field_size)
            elif e.code == ExtensionStatusError.StatusFileMalformed:
                ext_status.message = "Could not get a valid status from the extension {0}. Encountered the " \
                                     "following error: {1}".format(self.get_full_name(ext), ustr(e))
                ext_status.code = ExtensionErrorCodes.PluginSettingsStatusInvalid
                ext_status.status = ExtensionStatusValue.error

        return ext_status

    def get_ext_handling_status(self, ext):
        seq_no, ext_status_file = self.get_status_file_path(ext)

        # This is legacy scenario for cases when no extension settings is available
        if seq_no < 0 or ext_status_file is None:
            return None

        # Missing status file is considered a non-terminal state here
        # so that extension sequencing can wait until it becomes existing
        if not os.path.exists(ext_status_file):
            status = ExtensionStatusValue.warning
        else:
            ext_status = self.collect_ext_status(ext)
            status = ext_status.status if ext_status is not None else None

        return status

    def is_ext_handling_complete(self, ext):
        status = self.get_ext_handling_status(ext)

        # when seq < 0 (i.e.
no new user settings), the handling is complete and return None status if status is None: return True, None # If not in terminal state, it is incomplete if status not in _EXTENSION_TERMINAL_STATUSES: return False, status # Extension completed, return its status return True, status def report_error_on_incarnation_change(self, goal_state_changed, log_msg, event_msg, extension=None, op=WALAEventOperation.ReportStatus): # Since this code is called on a loop, logging as a warning only on goal state change, else logging it # as verbose if goal_state_changed: logger.warn(log_msg) add_event(name=self.get_extension_full_name(extension), version=self.ext_handler.version, op=op, message=event_msg, is_success=False, log_event=False) else: logger.verbose(log_msg) def get_extension_handler_statuses(self, handler_status, goal_state_changed): """ Get the list of ExtHandlerStatus objects corresponding to each extension in the Handler. Each object might have its own status for the Extension status but the Handler status would be the same for each extension in a Handle :return: List of ExtHandlerStatus objects for each extension in the Handler """ ext_handler_statuses = [] # TODO Refactor or remove this common code pattern (for each extension subordinate to an ext_handler, do X). for ext in self.extensions: # In MC, for disabled extensions we dont need to report status. Skip reporting if disabled and state == disabled # Extension.state corresponds to the state requested by CRP, self.__get_extension_state() corresponds to the # state of the extension on the VM. 
Skip reporting only if both are Disabled if self.should_perform_multi_config_op(ext) and \ ext.state == ExtensionState.Disabled and self.get_extension_state(ext) == ExtensionState.Disabled: continue # Breaking off extension reporting in 2 parts, one which is Handler dependent and the other that is Extension dependent try: ext_handler_status = ExtHandlerStatus() set_properties("ExtHandlerStatus", ext_handler_status, get_properties(handler_status)) except Exception as error: msg = "Something went wrong when trying to get a copy of the Handler status for {0}".format( self.get_extension_full_name()) self.report_error_on_incarnation_change(goal_state_changed, event_msg=msg, log_msg="{0}.\nStack Trace: {1}".format( msg, textutil.format_exception(error))) # Since this is a Handler level error and we need to do it per extension, breaking here and logging # error since we wont be able to report error anyways and saving it as a handler status (legacy behavior) self.set_handler_status(message=msg, code=-1) break # For the extension dependent stuff, if there's some unhandled error, we will report it back to CRP as an extension error. try: ext_status = self.collect_ext_status(ext) if ext_status is not None: ext_handler_status.extension_status = ext_status ext_handler_statuses.append(ext_handler_status) except ExtensionError as error: msg = "Unknown error when trying to fetch status from extension {0}".format( self.get_extension_full_name(ext)) self.report_error_on_incarnation_change(goal_state_changed, event_msg=msg, log_msg="{0}.\nStack Trace: {1}".format( msg, textutil.format_exception(error)), extension=ext) # Unexpected error, for single config, keep the behavior as is if not self.should_perform_multi_config_op(ext): self.set_handler_status(message=ustr(error), code=error.code) break # For MultiConfig, create a custom ExtensionStatus object with the error details and attach it to the Handler. 
                # This way the error would be reported back to CRP and the failure would be propagated instantly,
                # as compared to CRP eventually timing it out.
                ext_status = ExtensionStatus(name=ext.name, seq_no=ext.sequenceNumber,
                                             code=ExtensionErrorCodes.PluginUnknownFailure,
                                             status=ExtensionStatusValue.error, message=msg)
                ext_handler_status.extension_status = ext_status
                ext_handler_statuses.append(ext_handler_status)

        return ext_handler_statuses

    def collect_heartbeat(self):  # pylint: disable=R1710
        man = self.load_manifest()
        if not man.is_report_heartbeat():
            return
        heartbeat_file = os.path.join(conf.get_lib_dir(), self.get_heartbeat_file())
        if not os.path.isfile(heartbeat_file):
            raise ExtensionError("Failed to get heart beat file")
        if not self.is_responsive(heartbeat_file):
            return {
                "status": "Unresponsive",
                "code": -1,
                "message": "Extension heartbeat is not responsive"
            }
        try:
            heartbeat_json = fileutil.read_file(heartbeat_file)
            heartbeat = json.loads(heartbeat_json)[0]['heartbeat']
        except IOError as e:
            raise ExtensionError("Failed to get heartbeat file: {0}".format(e))
        except (ValueError, KeyError) as e:
            raise ExtensionError("Malformed heartbeat file: {0}".format(e))
        return heartbeat

    @staticmethod
    def is_responsive(heartbeat_file):
        """
        Was heartbeat_file updated within the last ten (10) minutes?

        :param heartbeat_file: str
        :return: bool
        """
        last_update = int(time.time() - os.stat(heartbeat_file).st_mtime)
        return last_update <= 600

    def launch_command(self, cmd, cmd_name=None, timeout=300,
                       extension_error_code=ExtensionErrorCodes.PluginProcessingError, env=None, extension=None):
        begin_utc = datetime.datetime.now(UTC)
        self.logger.verbose("Launch command: [{0}]", cmd)

        base_dir = self.get_base_dir()

        with tempfile.TemporaryFile(dir=base_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=base_dir, mode="w+b") as stderr:
                if env is None:
                    env = {}

                # Always add Extension Path and version to the current launch_command (Ask from publishers)
                env.update({
                    ExtCommandEnvVariable.ExtensionPath: base_dir,
                    ExtCommandEnvVariable.ExtensionVersion: str(self.ext_handler.version),
                    ExtCommandEnvVariable.WireProtocolAddress: self.protocol.get_endpoint(),
                    # Setting sequence number to 0 in case no settings provided, to keep in accordance with the empty
                    # 0.settings file that we create for such extensions.
                    ExtCommandEnvVariable.ExtensionSeqNumber:
                        str(extension.sequenceNumber) if extension is not None else _DEFAULT_SEQ_NO
                })

                if self.should_perform_multi_config_op(extension):
                    env[ExtCommandEnvVariable.ExtensionName] = extension.name

                supported_features = []
                for _, feature in get_agent_supported_features_list_for_extensions().items():
                    supported_features.append(
                        {
                            "Key": feature.name,
                            "Value": feature.version
                        }
                    )
                if supported_features:
                    env[ExtCommandEnvVariable.ExtensionSupportedFeatures] = json.dumps(supported_features)

                ext_name = self.get_extension_full_name(extension)
                try:
                    # Some extensions erroneously begin cmd with a slash; don't interpret those
                    # as root-relative.
(Issue #1170) command_full_path = os.path.join(base_dir, cmd.lstrip(os.path.sep)) log_msg = "Executing command: {0} with environment variables: {1}".format(command_full_path, json.dumps(env)) self.logger.info(log_msg) self.report_event(name=ext_name, message=log_msg, log_event=False) # Add the os environment variables before executing command env.update(os.environ) process_output = CGroupConfigurator.get_instance().start_extension_command( extension_name=self.get_full_name(extension), command=command_full_path, cmd_name=cmd_name, timeout=timeout, shell=True, cwd=base_dir, env=env, stdout=stdout, stderr=stderr, error_code=extension_error_code) except OSError as e: raise ExtensionError("Failed to launch '{0}': {1}".format(command_full_path, e.strerror), code=extension_error_code) duration = elapsed_milliseconds(begin_utc) log_msg = "Command: {0}\n{1}".format(cmd, "\n".join( [line for line in process_output.split('\n') if line != ""])) self.logger.info(log_msg) self.report_event(name=ext_name, message=log_msg, duration=duration, log_event=False) return process_output def load_manifest(self): man_file = self.get_manifest_file() try: data = json.loads(fileutil.read_file(man_file)) except (IOError, OSError) as e: raise ExtensionError('Failed to load manifest file ({0}): {1}'.format(man_file, e.strerror), code=ExtensionErrorCodes.PluginHandlerManifestNotFound) except ValueError: raise ExtensionError('Malformed manifest file ({0}).'.format(man_file), code=ExtensionErrorCodes.PluginHandlerManifestDeserializationError) return HandlerManifest(data[0]) def update_settings_file(self, settings_file, settings): settings_file = os.path.join(self.get_conf_dir(), settings_file) try: fileutil.write_file(settings_file, settings) except IOError as e: fileutil.clean_ioerror(e, paths=[settings_file]) raise ExtensionError(u"Failed to update settings file", e) def update_settings(self, extension): if self.extensions is None or len(self.extensions) == 0 or extension is None: # This is the 
behavior of waagent 2.0.x # The new agent has to be consistent with the old one. self.logger.info("Extension has no settings, write empty 0.settings") self.update_settings_file("{0}.settings".format(_DEFAULT_SEQ_NO), "") return settings = { 'publicSettings': extension.publicSettings, 'protectedSettings': extension.protectedSettings, 'protectedSettingsCertThumbprint': extension.certificateThumbprint } ext_settings = { "runtimeSettings": [{ "handlerSettings": settings }] } # MultiConfig: change the name to ..settings for MC and .settings for SC settings_file = "{0}.{1}.settings".format(extension.name, extension.sequenceNumber) if \ self.should_perform_multi_config_op(extension) else "{0}.settings".format(extension.sequenceNumber) self.logger.info("Update settings file: {0}", settings_file) self.update_settings_file(settings_file, json.dumps(ext_settings)) def create_handler_env(self): handler_env = { HandlerEnvironment.logFolder: self.get_log_dir(), HandlerEnvironment.configFolder: self.get_conf_dir(), HandlerEnvironment.statusFolder: self.get_status_dir(), HandlerEnvironment.heartbeatFile: self.get_heartbeat_file() } if get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported: handler_env[HandlerEnvironment.eventsFolder] = self.get_extension_events_dir() # For now, keep the preview key to not break extensions that were using the preview. 
handler_env[HandlerEnvironment.eventsFolder_preview] = self.get_extension_events_dir() env = [{ HandlerEnvironment.name: self.ext_handler.name, HandlerEnvironment.version: HandlerEnvironment.schemaVersion, HandlerEnvironment.handlerEnvironment: handler_env }] try: fileutil.write_file(self.get_env_file(), json.dumps(env)) except IOError as e: fileutil.clean_ioerror(e, paths=[self.get_base_dir(), self.pkg_file]) raise ExtensionDownloadError(u"Failed to save handler environment", e) def __get_handler_state_file_name(self, extension=None): if self.should_perform_multi_config_op(extension): return "{0}.HandlerState".format(extension.name) return "HandlerState" def set_handler_state(self, handler_state): self.__set_state(name=self.__get_handler_state_file_name(), value=handler_state) def get_handler_state(self): return self.__get_state(name=self.__get_handler_state_file_name(), default=ExtHandlerState.NotInstalled) def __set_extension_state(self, extension, extension_state): self.__set_state(name=self.__get_handler_state_file_name(extension), value=extension_state) def get_extension_state(self, extension=None): return self.__get_state(name=self.__get_handler_state_file_name(extension), default=ExtensionState.Disabled) def __set_state(self, name, value): state_dir = self.get_conf_dir() state_file = os.path.join(state_dir, name) try: if not os.path.exists(state_dir): fileutil.mkdir(state_dir, mode=0o700) fileutil.write_file(state_file, value) except IOError as e: fileutil.clean_ioerror(e, paths=[state_file]) self.logger.error("Failed to set state: {0}", e) def __get_state(self, name, default=None): state_dir = self.get_conf_dir() state_file = os.path.join(state_dir, name) if not os.path.isfile(state_file): return default try: return fileutil.read_file(state_file) except IOError as e: self.logger.error("Failed to get state: {0}", e) return default def __remove_extension_state_files(self, extension): self.logger.info("Removing states files for disabled extension: 
{0}".format(extension.name)) try: # MultiConfig: Remove all config/.*.settings, status/.*.status and config/.HandlerState files files_to_delete = [ os.path.join(self.get_conf_dir(), "{0}.*.settings".format(extension.name)), os.path.join(self.get_status_dir(), "{0}.*.status".format(extension.name)), os.path.join(self.get_conf_dir(), self.__get_handler_state_file_name(extension)) ] fileutil.rm_files(*files_to_delete) except Exception as error: extension_name = self.get_extension_full_name(extension) message = "Failed to remove extension state files for {0}: {1}".format(extension_name, ustr(error)) self.report_event(name=extension_name, message=message, is_success=False, log_event=False) self.logger.warn(message) def set_handler_status(self, status=ExtHandlerStatusValue.not_ready, message="", code=0): state_dir = self.get_conf_dir() handler_status = ExtHandlerStatus() handler_status.name = self.ext_handler.name handler_status.version = str(self.ext_handler.version) handler_status.message = redact_sas_token(message) handler_status.code = code handler_status.status = status handler_status.supports_multi_config = self.ext_handler.supports_multi_config status_file = os.path.join(state_dir, "HandlerStatus") try: handler_status_json = json.dumps(get_properties(handler_status)) if handler_status_json is not None: if not os.path.exists(state_dir): fileutil.mkdir(state_dir, mode=0o700) fileutil.write_file(status_file, handler_status_json) else: self.logger.error("Failed to create JSON document of handler status for {0} version {1}".format( self.ext_handler.name, self.ext_handler.version)) except (IOError, ValueError, ProtocolError) as error: fileutil.clean_ioerror(error, paths=[status_file]) self.logger.error("Failed to save handler status: {0}", textutil.format_exception(error)) def get_handler_status(self): state_dir = self.get_conf_dir() status_file = os.path.join(state_dir, "HandlerStatus") if not os.path.isfile(status_file): return None handler_status_contents = "" try: 
            handler_status_contents = fileutil.read_file(status_file)
            data = json.loads(handler_status_contents)
            handler_status = ExtHandlerStatus()
            set_properties("ExtHandlerStatus", handler_status, data)
            return handler_status
        except (IOError, ValueError) as error:
            self.logger.error("Failed to get handler status: {0}", error)
        except Exception as error:
            error_msg = "Failed to get handler status message: {0}.\n Contents of file: {1}".format(
                ustr(error), handler_status_contents).replace('"', '\'')
            add_periodic(
                delta=logger.EVERY_HOUR,
                name=AGENT_NAME,
                version=CURRENT_VERSION,
                op=WALAEventOperation.ExtensionProcessing,
                is_success=False,
                message=error_msg)
            raise

        return None

    def get_extension_package_zipfile_name(self):
        return "{0}__{1}{2}".format(self.ext_handler.name, self.ext_handler.version, HANDLER_PKG_EXT)

    def get_full_name(self, extension=None):
        """
        :return: <HandlerName>-<HandlerVersion> if extension is None or the Handler does not support Multi Config,
                 else return <HandlerName>.<ExtensionName>-<HandlerVersion>
        """
        return "{0}-{1}".format(self.get_extension_full_name(extension), self.ext_handler.version)

    def get_base_dir(self):
        return os.path.join(conf.get_lib_dir(), self.get_full_name())

    def get_status_dir(self):
        return os.path.join(self.get_base_dir(), "status")

    def get_conf_dir(self):
        return os.path.join(self.get_base_dir(), 'config')

    def get_extension_events_dir(self):
        return os.path.join(self.get_log_dir(), EVENTS_DIRECTORY)

    def get_heartbeat_file(self):
        return os.path.join(self.get_base_dir(), 'heartbeat.log')

    def get_manifest_file(self):
        return os.path.join(self.get_base_dir(), 'HandlerManifest.json')

    def get_env_file(self):
        return os.path.join(self.get_base_dir(), HandlerEnvironment.fileName)

    def get_log_dir(self):
        return os.path.join(conf.get_ext_log_dir(), self.ext_handler.name)

    @staticmethod
    def _read_status_file(ext_status_file):
        err_count = 0
        while True:
            try:
                return ExtHandlerInstance._read_and_parse_json_status_file(ext_status_file)
            except Exception:
                err_count += 1
                if err_count >= _NUM_OF_STATUS_FILE_RETRIES:
                    raise
                time.sleep(_STATUS_FILE_RETRY_DELAY)

    @staticmethod
    def _read_and_parse_json_status_file(ext_status_file):
        if not os.path.exists(ext_status_file):
            raise ExtensionStatusError(msg="Status file {0} does not exist".format(ext_status_file),
                                       code=ExtensionStatusError.FileNotExists)
        try:
            data_str = fileutil.read_file(ext_status_file)
        except IOError as e:
            raise ExtensionStatusError(msg=ustr(e), inner=e, code=ExtensionStatusError.CouldNotReadStatusFile)
        try:
            data = json.loads(data_str)
        except (ValueError, TypeError) as e:
            raise ExtensionStatusError(msg="{0} \n First 2000 Bytes of status file:\n {1}".format(ustr(e), ustr(data_str)[:2000]),
                                       inner=e,
                                       code=ExtensionStatusError.InvalidJsonFile)
        return data_str, data

    def _process_substatus_list(self, substatus_list, current_status_size=0):
        processed_substatus = []

        # Truncating the substatus to reduce the size, and preserve other fields of the text
        for substatus in substatus_list:
            substatus.name, field_size = self._truncate_message(substatus.name, _MAX_SUBSTATUS_FIELD_LENGTH)
            current_status_size += field_size

            substatus.message, field_size = self._truncate_message(substatus.message, _MAX_SUBSTATUS_FIELD_LENGTH)
            current_status_size += field_size

            if current_status_size <= _MAX_STATUS_FILE_SIZE_IN_BYTES:
                processed_substatus.append(substatus)
            else:
                break

        return processed_substatus

    @staticmethod
    def _truncate_message(field, truncate_size=_MAX_SUBSTATUS_FIELD_LENGTH):  # pylint: disable=R1710
        if field is None:  # pylint: disable=R1705
            return
        else:
            truncated_field = field if len(field) < truncate_size else field[:truncate_size] + _TRUNCATED_SUFFIX
            return truncated_field, len(truncated_field)


class HandlerEnvironment(object):
    # HandlerEnvironment.json schema version
    schemaVersion = 1.0
    fileName = "HandlerEnvironment.json"
    handlerEnvironment = "handlerEnvironment"
    logFolder = "logFolder"
    configFolder = "configFolder"
    statusFolder = "statusFolder"
    heartbeatFile = "heartbeatFile"
    eventsFolder_preview = "eventsFolder_preview"
    eventsFolder = "eventsFolder"
    name = "name"
    version = "version"


class HandlerManifest(object):
    def __init__(self, data):
        if data is None or data['handlerManifest'] is None:
            raise ExtensionError('Malformed manifest file.')
        self.data = data

    def get_name(self):
        return self.data["name"]

    def get_version(self):
        return self.data["version"]

    def get_install_command(self):
        return self.data['handlerManifest']["installCommand"]

    def get_uninstall_command(self):
        return self.data['handlerManifest']["uninstallCommand"]

    def get_update_command(self):
        return self.data['handlerManifest']["updateCommand"]

    def get_enable_command(self):
        return self.data['handlerManifest']["enableCommand"]

    def get_disable_command(self):
        return self.data['handlerManifest']["disableCommand"]

    def is_report_heartbeat(self):
        value = self.data['handlerManifest'].get('reportHeartbeat', False)
        return self._parse_boolean_value(value, default_val=False)

    def is_update_with_install(self):
        update_mode = self.data['handlerManifest'].get('updateMode')
        if update_mode is None:
            return True
        return update_mode.lower() == "updatewithinstall"

    def is_continue_on_update_failure(self):
        value = self.data['handlerManifest'].get('continueOnUpdateFailure', False)
        return self._parse_boolean_value(value, default_val=False)

    def supports_multiple_extensions(self):
        value = self.data['handlerManifest'].get('supportsMultipleExtensions', False)
        return self._parse_boolean_value(value, default_val=False)

    def get_resource_limits(self):
        return ResourceLimits(self.data.get('resourceLimits', None))

    def report_invalid_boolean_properties(self, ext_name):
        """
        Check that the specified keys in the handler manifest have boolean values.
        """
        for key in ['reportHeartbeat', 'continueOnUpdateFailure', 'supportsMultipleExtensions']:
            value = self.data['handlerManifest'].get(key)
            if value is not None and not isinstance(value, bool):
                msg = "In the handler manifest: '{0}' has a non-boolean value [{1}] for boolean type. " \
                      "Please change it to a boolean value.".format(key, value)
                logger.info(msg)
                add_event(name=ext_name, message=msg, op=WALAEventOperation.ExtensionHandlerManifest, log_event=False)

    @staticmethod
    def _parse_boolean_value(value, default_val):
        """
        Expects a boolean value, but for backward compatibility 'true' (case-insensitive) is accepted,
        and other values default to False.
        Note: the json module returns unicode on py2 (unicode was removed in py3);
        ustr is a unicode object for py2 and a str object for py3.
        """
        if not isinstance(value, bool):
            return True if isinstance(value, ustr) and value.lower() == "true" else default_val
        return value


class ResourceLimits(object):
    def __init__(self, data):
        self.data = data

    def get_extension_slice_cpu_quota(self):
        if self.data is not None:
            return self.data.get('cpuQuotaPercentage', None)
        return None

    def get_extension_slice_memory_quota(self):
        if self.data is not None:
            return self.data.get('memoryQuotaInMB', None)
        return None

    def get_service_list(self):
        if self.data is not None:
            return self.data.get('services', None)
        return None


class ExtensionStatusError(ExtensionError):
    """
    When extension failed to provide a valid status file
    """
    CouldNotReadStatusFile = 1
    InvalidJsonFile = 2
    StatusFileMalformed = 3
    MaxSizeExceeded = 4
    FileNotExists = 5

    def __init__(self, msg=None, inner=None, code=-1):  # pylint: disable=W0235
        super(ExtensionStatusError, self).__init__(msg, inner, code)

Azure-WALinuxAgent-a976115/azurelinuxagent/ga/firewall_manager.py
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#
import errno
import json
import os
import re

from azurelinuxagent.common import logger
from azurelinuxagent.common import event
from azurelinuxagent.common.event import WALAEventOperation
from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.utils.shellutil import CommandError


class FirewallManagerNotAvailableError(Exception):
    """
    Exception raised when the command-line tool needed to manage the firewall (e.g. iptables, firewalld, nft) is not available
    """


class FirewallStateError(Exception):
    """
    Exception raised when the firewall rules are not set up correctly.
    """


class FirewallManager(object):
    """
    FirewallManager abstracts the interface for managing the firewall rules on the WireServer address. Concrete
    implementations provide the underlying functionality using command-line tools that vary across distros
    (e.g. iptables, firewalld, and nftables.)

    If a concrete implementation cannot be instantiated because the underlying command-line tool is not available,
    it must raise a FirewallManagerNotAvailableError exception.

    Each implementation must set three rules on the WireServer address:

        * "ACCEPT DNS" - Azure DNS runs on the WireServer address, so all traffic on port 53 must be allowed for all users.
        * "ACCEPT" - All traffic from the Agent (which runs as root) must be accepted.
        * "DROP" - All other traffic to the WireServer address must be dropped.
""" def __init__(self, wire_server_address): self._wire_server_address = wire_server_address # Friendly names for the firewall rules ACCEPT_DNS = "ACCEPT DNS" ACCEPT = "ACCEPT" DROP = "DROP" @staticmethod def create(wire_server_address): """ Creates the appropriate FirewallManager implementation depending on the availability of the underlying command-line tools. NOTE: Currently this method checks only for iptables and nftables, giving precedence to the former. """ try: manager = IpTables(wire_server_address) event.info(WALAEventOperation.Firewall, "Using iptables [version {0}] to manage firewall rules", manager.version) return manager except FirewallManagerNotAvailableError: pass try: manager = NfTables(wire_server_address) event.info(WALAEventOperation.Firewall, "Using nft [version {0}] to manage firewall rules", manager.version) return manager except FirewallManagerNotAvailableError: pass raise FirewallManagerNotAvailableError("Cannot create a firewall manager; no known command-line tools are available") @property def version(self): """ Returns the version of the underlying command-line tool. """ raise NotImplementedError() def setup(self): """ Sets up the firewall rules for the WireServer. """ raise NotImplementedError() def remove(self): """ Removes all the existing firewall rules. """ raise NotImplementedError() def remove_legacy_rule(self): """ The iptables and firewalld managers need to remove legacy rules; no-op for other managers. """ def check(self): """ Checks the current state of the firewall. Returns True if the firewall is set up correctly, or False if the firewall is not setup. Raises a FirewallSetupError if the firewall is only partially set up (e.g. a rule in the chain is missing). """ raise NotImplementedError() def get_state(self): """ Returns the current state of the firewall rules as a string. The format of the return value is implementation-specific and depends on the underlying command-line tool. 
        If the command to list the rules fails, the return value is an error message.
        """
        try:
            return shellutil.run_command(self._get_state_command())
        except Exception as e:
            message = "Failed to get the current state of the firewall rules: {0}".format(ustr(e))
            logger.warn("Listing firewall rules failed: {0}".format(ustr(e)))
            return message

    def _get_state_command(self):
        """
        Returns the command to list the current state of the firewall.
        """
        raise NotImplementedError()


class _FirewallManagerMultipleRules(FirewallManager):
    """
    Base class for firewall managers that handle multiple rules, each rule being manipulated with a different
    command line (e.g. iptables, firewalld)
    """

    def setup(self):
        for command in self._get_commands(self._get_append_command_option()):
            shellutil.run_command(command[1])

    def remove(self):
        existing_rules = []
        for rule, command in self._get_commands(self._get_check_command_option()):
            try:
                shellutil.run_command(command)
                existing_rules.append(rule)
            except CommandError as e:
                if e.returncode == 1:  # rule does not exist
                    pass
                else:
                    raise
        for rule, command in self._get_commands(self._get_delete_command_option()):
            if rule in existing_rules:
                self._execute_delete_command(command)

    def remove_legacy_rule(self):
        check_command = self._get_legacy_rule_command(self._get_check_command_option())
        try:
            shellutil.run_command(check_command)
        except CommandError as e:
            if e.returncode == 1:  # rule does not exist
                logger.info("Did not find a legacy firewall rule: {0}", check_command)
                return
        logger.info("Found legacy firewall rule: {0}", check_command)
        delete_command = self._get_legacy_rule_command(self._get_delete_command_option())
        self._execute_delete_command(delete_command)
        event.info(WALAEventOperation.Firewall, "Removed legacy firewall rule: {0}", delete_command)

    def _execute_delete_command(self, command):
        """
        Executes the delete command; derived classes can customize the behavior if needed (for example, to add retries).
""" shellutil.run_command(command) def check(self): missing_rules = [] existing_rules = [] missing_rules_reasons = [] for rule, command in self._get_commands(self._get_check_command_option()): try: shellutil.run_command(command) existing_rules.append(rule) except CommandError as e: if e.returncode == 1: # rule does not exist missing_rules.append(rule) # Issue: Even though the drop rule exists, the agent perceives it as missing when checking all rules. # This might occur because we mark the rule as missing due to the same error code being returned for other reasons. # So logging the error message to understand the reason for the rule being marked as missing. missing_rules_reasons.append(e.stderr) else: raise if len(missing_rules) == 0: # all rules are present return True if len(existing_rules) > 0: # some rules are present, but not all raise FirewallStateError("The following rules are missing: {0} due to: {1}".format(missing_rules, missing_rules_reasons)) return False def _get_commands(self, command_option): """ Yields each of the commands needed to set up the firewall rules, using the given command option. IMPORTANT: The order in which these rules are returned is critical, since rules are appended sequentially. The first item in the array will be at the top of the chain, etc. """ yield FirewallManager.ACCEPT_DNS, self._get_accept_dns_rule_command(command_option) yield FirewallManager.ACCEPT, self._get_accept_rule_command(command_option) yield FirewallManager.DROP, self._get_drop_rule_command(command_option) def _get_accept_dns_rule_command(self, command_option): """ Returns the command to manipulate the rule for accepting DNS requests on the WireServer address. """ raise NotImplementedError() def _get_accept_rule_command(self, command_option): """ Returns the command to manipulate the rule for accepting request on the WireServer address issued by the Agent. 
""" raise NotImplementedError() def _get_drop_rule_command(self, command_option): """ Returns the command to manipulate the rule for dropping all requests on the WireServer address. """ raise NotImplementedError() def _get_legacy_rule_command(self, command_option): """ Returns the command to delete the legacy firewall rule. See the overrides of this method for details on those rules. """ raise NotImplementedError() def _get_append_command_option(self): """ Returns the command-line option to append a firewall to the output chain. """ raise NotImplementedError() def _get_check_command_option(self): """ Returns the command-line option to check for existence of rule on the output chain. """ raise NotImplementedError() def _get_delete_command_option(self): """ Returns the command-line option to delete a firewall rule from the output chain. """ raise NotImplementedError() class IpTables(_FirewallManagerMultipleRules): """ FirewallManager based on the iptables command-line tool. """ def __init__(self, wire_server_address): super(IpTables, self).__init__(wire_server_address) # # The wait option, "-w" was introduced in iptables 1.4.21. Check if we can use it. # try: output = shellutil.run_command(["iptables", "--version"]) # # The output is similar to # # $ iptables --version # iptables v1.8.7 (nf_tables) # # Extract anything that looks like a version number. # match = re.match(r"^[^\d.]*([\d.]+).*$", output) if match is None: raise Exception('output of "--version": {0}'.format(output)) self._version = FlexibleVersion(match.group(1)) use_wait_option = self._version >= FlexibleVersion('1.4.21') except Exception as exception: if isinstance(exception, OSError) and exception.errno == errno.ENOENT: # pylint: disable=no-member raise FirewallManagerNotAvailableError("iptables is not available") event.warn(WALAEventOperation.Firewall, "Unable to determine version of iptables; will not use -w option. 
--version output: {0}", ustr(exception)) self._version = "unknown" use_wait_option = False if use_wait_option: self._base_command = ["iptables", "-w", "-t", "security"] else: self._base_command = ["iptables", "-t", "security"] @property def version(self): return self._version def _execute_delete_command(self, command): """ Continually execute the delete operation until the return code is non-zero or the limit has been reached. """ for _ in range(1, 100): try: shellutil.run_command(command) except CommandError as e: if e.returncode == 1: return if e.returncode == 2: raise Exception("Invalid firewall deletion command '{0}'".format(command)) def _get_state_command(self): return self._base_command + ["-t", "security", "-L", "-nxv"] def _get_accept_dns_rule_command(self, command_option): return self._base_command + [command_option, "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", "--destination-port", "53", "-j", "ACCEPT"] def _get_accept_rule_command(self, command_option): return self._base_command + [command_option, "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", "-m", "owner", "--uid-owner", str(os.getuid()), "-j", "ACCEPT"] def _get_drop_rule_command(self, command_option): return self._base_command + [command_option, "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"] def _get_legacy_rule_command(self, command_option): # There was a rule change at 2.2.26, which started dropping non-root traffic to WireServer. The previous rule allowed traffic, and needs to be removed # for the newer DROP rule to have any effect. This function returns the command to manipulate the legacy rule that was added <= 2.2.25. Until 2.2.25 # has aged out, keep this cleanup in place. 
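The `IpTables` constructor above probes `iptables --version` and extracts the version with the regex `^[^\d.]*([\d.]+).*$` before deciding whether the `-w` (wait) option, introduced in iptables 1.4.21, can be used. A standalone sketch of that probe, substituting a plain tuple comparison for the agent's `FlexibleVersion` (the `supports_wait_option` name is an assumption):

```python
import re


def supports_wait_option(version_output):
    """Return True if this iptables supports the -w (wait) option, added in 1.4.21."""
    # Same extraction regex the agent uses: skip any non-version prefix,
    # then capture anything that looks like a dotted version number.
    match = re.match(r"^[^\d.]*([\d.]+).*$", version_output)
    if match is None:
        raise ValueError('output of "--version": {0}'.format(version_output))
    version = tuple(int(p) for p in match.group(1).split(".") if p != "")
    return version >= (1, 4, 21)


print(supports_wait_option("iptables v1.8.7 (nf_tables)"))  # True
print(supports_wait_option("iptables v1.4.20"))             # False
```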
        return self._base_command + [command_option, "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "ACCEPT"]

    def _get_append_command_option(self):
        return "-A"

    def _get_check_command_option(self):
        return "-C"

    def _get_delete_command_option(self):
        return "-D"


class FirewallCmd(_FirewallManagerMultipleRules):
    """
    FirewallManager based on the firewalld command-line tool.
    """
    def __init__(self, wire_server_address):
        super(FirewallCmd, self).__init__(wire_server_address)
        try:
            self._version = shellutil.run_command(["firewall-cmd", "--version"]).strip()
        except Exception as exception:
            if isinstance(exception, OSError) and exception.errno == errno.ENOENT:  # pylint: disable=no-member
                raise FirewallManagerNotAvailableError("firewall-cmd is not available")
            self._version = "unknown"

    @property
    def version(self):
        return self._version

    def _get_state_command(self):
        return ["firewall-cmd", "--permanent", "--direct", "--get-all-passthroughs"]

    def _get_accept_dns_rule_command(self, command_option):
        return ["firewall-cmd", "--permanent", "--direct", command_option, "ipv4", "-t", "security", "-A", "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", '--destination-port', '53', '-j', 'ACCEPT']

    def _get_accept_rule_command(self, command_option):
        return ["firewall-cmd", "--permanent", "--direct", command_option, "ipv4", "-t", "security", "-A", "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", "-m", "owner", "--uid-owner", str(os.getuid()), "-j", "ACCEPT"]

    def _get_drop_rule_command(self, command_option):
        return ["firewall-cmd", "--permanent", "--direct", command_option, "ipv4", "-t", "security", "-A", "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]

    def _get_legacy_rule_command(self, command_option):
        # Agents <= 2.7.0.6 inserted (-I) the rule to accept DNS traffic; later agents changed that to append (-A) the rule.
        # The insert rule needs to be removed, otherwise there will be duplicate rules for DNS.
        return ["firewall-cmd", "--permanent", "--direct", command_option, "ipv4", "-t", "security", "-I", "OUTPUT", "-d", self._wire_server_address, "-p", "tcp", '--destination-port', '53', '-j', 'ACCEPT']

    def _get_append_command_option(self):
        return "--passthrough"

    def _get_check_command_option(self):
        return "--query-passthrough"

    def _get_delete_command_option(self):
        return "--remove-passthrough"


class NfTables(FirewallManager):
    """
    FirewallManager based on the nft command-line tool.
    """
    def __init__(self, wire_server_address):
        super(NfTables, self).__init__(wire_server_address)
        try:
            self._version = shellutil.run_command(["nft", "--version"]).strip()
        except Exception as exception:
            if isinstance(exception, OSError) and exception.errno == errno.ENOENT:  # pylint: disable=no-member
                raise FirewallManagerNotAvailableError("nft is not available")
            self._version = "unknown"

    @property
    def version(self):
        return self._version

    def setup(self):
        shellutil.run_command(["nft", "-f", "-"], input="""
add table ip walinuxagent
add chain ip walinuxagent output {{ type filter hook output priority 0 ; policy accept ; }}
add rule ip walinuxagent output ip daddr {0} tcp dport != 53 skuid != {1} ct state invalid,new counter drop
""".format(self._wire_server_address, os.getuid()))

    def remove(self):
        shellutil.run_command(["nft", "delete", "table", "walinuxagent"])

    def check(self):
        #
        # First check that the walinuxagent table exists.
# # The output of the list command is similar to (see 'man libnftables-json' for details): # # { # "nftables": [ # { "metainfo": { "version": "1.0.2", "release_name": "Lester Gooch", "json_schema_version": 1 } }, # { "table": { "family": "ip", "name": "walinuxagent", "handle": 2 } } # ] # } # output_text = shellutil.run_command(["nft", "--json", "list", "tables"]) try: output = json.loads(output_text) tables = [i["table"] for i in output["nftables"] if i.get("table") is not None] if all(t["name"] != "walinuxagent" for t in tables): return False except Exception as exception: raise Exception("Can't parse the output of 'nft list tables'\n{0}\nERROR: {1}".format(output_text, exception)) # # Now check that the firewall rule is set up correctly. # # The output of the list command is similar to (see 'man libnftables-json' for details): # # { # "nftables": [ # { "metainfo": { "version": "1.0.2", "release_name": "Lester Gooch", "json_schema_version": 1 } }, # { "table": { "family": "ip", "name": "walinuxagent", "handle": 2 } }, # { "chain": { "family": "ip", "table": "walinuxagent", "name": "output", "handle": 1, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } }, # { # "rule": { # "family": "ip", "table": "walinuxagent", "chain": "output", "handle": 2, # "expr": [ # { "match": { # "op": "==", # "left": { "payload": { "protocol": "ip", "field": "daddr" } }, # "right": "168.63.129.16" # }}, # { "match": { # "op": "!=", # "left": { "payload": { "protocol": "tcp", "field": "dport" } }, # "right": 53 # }}, # { "match": { # "op": "!=", # "left": { "meta": { "key": "skuid" } }, # "right": 0 # }}, # { "match": { # "op": "in", # "left": { "ct": { "key": "state" } }, # "right": [ "invalid", "new" ] # }}, # { "counter": { # "packets": 0, # "bytes": 0 # }}, # { "drop": null } # ] # } # } # ] # } # output_text = shellutil.run_command(["nft", "--json", "list", "table", "walinuxagent"]) errors = [] try: output = json.loads(output_text) rules = [i["rule"] for i in 
output["nftables"] if i.get("rule") is not None] if len(rules) != 1: raise FirewallStateError("There should be exactly one rule in the 'output' chain") for r in rules: if r["table"] == "walinuxagent" and r["family"] == "ip" and r["chain"] == "output": expr = r["expr"] break else: raise FirewallStateError("Cannot find any rules for the 'output' chain") address_match = {"match": {"op": "==", "left": {"payload": {"protocol": "ip", "field": "daddr"}}, "right": self._wire_server_address}} if all(i != address_match for i in expr): errors.append("No expression matches the WireServer address") dns_match = {"match": {"op": "!=", "left": {"payload": {"protocol": "tcp", "field": "dport"}}, "right": 53}} if all(i != dns_match for i in expr): errors.append("No expression excludes the DNS port") owner_expr = {"match": {"op": "!=", "left": {"meta": {"key": "skuid"}}, "right": os.getuid()}} if all(i != owner_expr for i in expr): errors.append("No expression excludes the Agent's UID") drop_action = {"drop": None} if all(i != drop_action for i in expr): errors.append("The drop action is missing") except FirewallStateError: raise except Exception as exception: raise Exception("Can't parse the output of 'nft list table walinuxagent'\n{0}\nERROR: {1}".format(output_text, exception)) if len(errors) > 0: raise FirewallStateError("{0}".format(errors)) return True def _get_state_command(self): return ['nft', 'list', 'table', 'walinuxagent'] Azure-WALinuxAgent-a976115/azurelinuxagent/ga/ga_version_updater.py000066400000000000000000000176541510742556200254550ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import glob import os import shutil from azurelinuxagent.common import conf, logger from azurelinuxagent.common.exception import AgentUpdateError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import AGENT_NAME, AGENT_DIR_PATTERN, CURRENT_VERSION from azurelinuxagent.ga.guestagent import GuestAgent, AGENT_MANIFEST_FILE class GAVersionUpdater(object): def __init__(self, gs_id): self._gs_id = gs_id self._version = FlexibleVersion("0.0.0.0") # Initialize to zero and retrieve from goal state later stage self._agent_manifest = None # Initialize to None and fetch from goal state at different stage for different updater def is_update_allowed_this_time(self, ext_gs_updated): """ This function checks if we allowed to update the agent. @param ext_gs_updated: True if extension goal state updated else False @return false when we don't allow updates. """ raise NotImplementedError def is_rsm_update_enabled(self, agent_family, ext_gs_updated): """ return True if we need to switch to RSM-update from self-update and vice versa. 
@param agent_family: agent family @param ext_gs_updated: True if extension goal state updated else False @return: False when agent need to stop rsm updates True: when agent need to switch to rsm update """ raise NotImplementedError def retrieve_agent_version(self, agent_family, goal_state): """ This function fetches the agent version from the goal state for the given family. @param agent_family: agent family @param goal_state: goal state """ raise NotImplementedError def is_retrieved_version_allowed_to_update(self, agent_family): """ Checks all base condition if new version allow to update. @param agent_family: agent family @return: True if allowed to update else False """ raise NotImplementedError def log_new_agent_update_message(self): """ This function logs the update message after we check agent allowed to update. """ raise NotImplementedError def proceed_with_update(self): """ performs upgrade/downgrade @return: AgentUpgradeExitException """ raise NotImplementedError @property def version(self): """ Return version """ return self._version def sync_new_gs_id(self, gs_id): """ Update gs_id @param gs_id: goal state id """ self._gs_id = gs_id @staticmethod def download_new_agent_pkg(package_to_download, protocol, is_fast_track_goal_state): """ Function downloads the new agent. 
        @param package_to_download: package to download
        @param protocol: protocol object
        @param is_fast_track_goal_state: True if goal state is fast track else False
        """
        agent_name = "{0}-{1}".format(AGENT_NAME, package_to_download.version)
        agent_dir = os.path.join(conf.get_lib_dir(), agent_name)
        agent_pkg_path = ".".join((os.path.join(conf.get_lib_dir(), agent_name), "zip"))
        agent_handler_manifest_file = os.path.join(agent_dir, AGENT_MANIFEST_FILE)
        if not os.path.exists(agent_dir) or not os.path.isfile(agent_handler_manifest_file):
            protocol.client.download_zip_package(agent_name, package_to_download.uris, agent_pkg_path, agent_dir, use_verify_header=is_fast_track_goal_state, signature="", enforce_signature=False)
        else:
            logger.info("Agent {0} was previously downloaded - skipping download", agent_name)

        if not os.path.isfile(agent_handler_manifest_file):
            try:
                # Clean up the agent directory if the manifest file is missing
                logger.info("Agent handler manifest file is missing, cleaning up the agent directory: {0}".format(agent_dir))
                if os.path.isdir(agent_dir):
                    shutil.rmtree(agent_dir, ignore_errors=True)
            except Exception as err:
                logger.warn("Unable to delete Agent directory: {0}".format(err))
            raise AgentUpdateError("Downloaded agent package: {0} is missing agent handler manifest file: {1}".format(agent_name, agent_handler_manifest_file))

    def download_and_get_new_agent(self, protocol, agent_family, goal_state):
        """
        Function downloads the new agent and returns the downloaded version.
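`download_new_agent_pkg` above derives three sibling paths from the lib dir and the package version. A sketch of that layout with stand-in values (`/var/lib/waagent` and the version string are examples, not values from this excerpt):

```python
import os.path

# Stand-in values; in the agent these come from conf.get_lib_dir() and the package metadata.
LIB_DIR = "/var/lib/waagent"
AGENT_NAME = "WALinuxAgent"
version = "9.9.9.9"  # hypothetical version

agent_name = "{0}-{1}".format(AGENT_NAME, version)
agent_dir = os.path.join(LIB_DIR, agent_name)
agent_pkg_path = ".".join((os.path.join(LIB_DIR, agent_name), "zip"))
agent_handler_manifest_file = os.path.join(agent_dir, "HandlerManifest.json")

print(agent_dir)                    # /var/lib/waagent/WALinuxAgent-9.9.9.9
print(agent_pkg_path)               # /var/lib/waagent/WALinuxAgent-9.9.9.9.zip
print(agent_handler_manifest_file)  # /var/lib/waagent/WALinuxAgent-9.9.9.9/HandlerManifest.json
```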
        @param protocol: protocol object
        @param agent_family: agent family
        @param goal_state: goal state
        @return: GuestAgent: downloaded agent
        """
        if self._agent_manifest is None:  # Fetch agent manifest if it's not already done
            self._agent_manifest = goal_state.fetch_agent_manifest(agent_family.name, agent_family.uris)
        package_to_download = self._get_agent_package_to_download(self._agent_manifest, self._version)
        is_fast_track_goal_state = goal_state.extensions_goal_state.source == GoalStateSource.FastTrack
        self.download_new_agent_pkg(package_to_download, protocol, is_fast_track_goal_state)
        agent = GuestAgent.from_agent_package(package_to_download)
        return agent

    def purge_extra_agents_from_disk(self):
        """
        Remove the agents from disk except current version and new agent version
        """
        known_agents = [CURRENT_VERSION, self._version]
        self._purge_unknown_agents_from_disk(known_agents)

    def _get_agent_package_to_download(self, agent_manifest, version):
        """
        Returns the package of the given Version found in the manifest.
        If not found, returns exception
        """
        for pkg in agent_manifest.pkg_list.versions:
            if FlexibleVersion(pkg.version) == version:
                # Found a matching package, only download that one
                return pkg

        raise AgentUpdateError("No matching package found in the agent manifest for version: {0} in goal state incarnation: {1}, "
                               "skipping agent update".format(str(version), self._gs_id))

    @staticmethod
    def _purge_unknown_agents_from_disk(known_agents):
        """
        Remove from disk all directories and .zip files of unknown agents
        """
        path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME))

        for agent_path in glob.iglob(path):
            try:
                name = fileutil.trim_ext(agent_path, "zip")
                m = AGENT_DIR_PATTERN.match(name)
                if m is not None and FlexibleVersion(m.group(1)) not in known_agents:
                    if os.path.isfile(agent_path):
                        logger.info(u"Purging outdated Agent file {0}", agent_path)
                        os.remove(agent_path)
                    else:
                        logger.info(u"Purging outdated Agent directory {0}", agent_path)
                        shutil.rmtree(agent_path)
            except Exception as e:
                logger.warn(u"Purging {0} raised exception: {1}", agent_path, ustr(e))

Azure-WALinuxAgent-a976115/azurelinuxagent/ga/guestagent.py

import json
import os
import shutil
import time

from azurelinuxagent.common.event import add_event, WALAEventOperation
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils import textutil
from azurelinuxagent.common import logger, conf, event
from azurelinuxagent.common.exception import UpdateError
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.version import AGENT_DIR_PATTERN, AGENT_NAME
from azurelinuxagent.ga.exthandlers import HandlerManifest

AGENT_ERROR_FILE = "error.json"  # File name for agent error record
AGENT_MANIFEST_FILE = "HandlerManifest.json"
MAX_FAILURE = 3  # Max failure allowed for agent before declare bad agent

AGENT_UPDATE_COUNT_FILE = "update_attempt.json"  # File for tracking
# agent update attempt count
RSM_UPDATE_STATE_FILE = "waagent_rsm_update"
INITIAL_UPDATE_STATE_FILE = "waagent_initial_update"


class GuestAgent(object):
    def __init__(self, path, pkg):
        """
        If 'path' is given, the object is initialized to the version installed under that path.
        If 'pkg' is given, the version specified in the package information is downloaded and the object is initialized to that version.

        NOTE: Prefer using the from_installed_agent and from_agent_package methods instead of calling __init__ directly
        """
        self.pkg = pkg
        version = None
        if path is not None:
            m = AGENT_DIR_PATTERN.match(path)
            if m is None:
                raise UpdateError(u"Illegal agent directory: {0}".format(path))
            version = m.group(1)
        elif self.pkg is not None:
            version = pkg.version

        if version is None:
            raise UpdateError(u"Illegal agent version: {0}".format(version))
        self.version = FlexibleVersion(version)

        location = u"disk" if path is not None else u"package"
        logger.verbose(u"Loading Agent {0} from {1}", self.name, location)

        self.error = GuestAgentError(self.get_agent_error_file())
        self.error.load()

        self.update_attempt_data = GuestAgentUpdateAttempt(self.get_agent_update_count_file())
        self.update_attempt_data.load()

        try:
            self._ensure_loaded()
        except Exception as e:
            # If we're unable to unpack the agent, delete the Agent directory
            try:
                if os.path.isdir(self.get_agent_dir()):
                    shutil.rmtree(self.get_agent_dir(), ignore_errors=True)
            except Exception as err:
                logger.warn("Unable to delete Agent files: {0}".format(err))
            msg = u"Agent {0} install failed with exception:".format(self.name)
            detailed_msg = '{0} {1}'.format(msg, textutil.format_exception(e))
            add_event(
                AGENT_NAME,
                version=self.version,
                op=WALAEventOperation.Install,
                is_success=False,
                message=detailed_msg)

    @staticmethod
    def from_installed_agent(path):
        """
        Creates an instance of GuestAgent using the agent installed in the given 'path'.
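`GuestAgent.__init__` above extracts the version from the install directory name via `AGENT_DIR_PATTERN`. That pattern is defined in `azurelinuxagent.common.version` and is not shown in this excerpt, so the regex below is a stand-in with the same intent:

```python
import re

# Stand-in for AGENT_DIR_PATTERN; the real pattern is defined in azurelinuxagent.common.version.
AGENT_DIR_PATTERN = re.compile(r".*/WALinuxAgent-(\d+(?:\.\d+)*)$")

m = AGENT_DIR_PATTERN.match("/var/lib/waagent/WALinuxAgent-2.9.1.1")
print(m.group(1))  # 2.9.1.1
```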
""" return GuestAgent(path, None) @staticmethod def from_agent_package(package): """ Creates an instance of GuestAgent using the information provided in the 'package'; if that version of the agent is not installed it, it installs it. """ return GuestAgent(None, package) @property def name(self): return "{0}-{1}".format(AGENT_NAME, self.version) def get_agent_cmd(self): return self.manifest.get_enable_command() def get_agent_dir(self): return os.path.join(conf.get_lib_dir(), self.name) def get_agent_error_file(self): return os.path.join(conf.get_lib_dir(), self.name, AGENT_ERROR_FILE) def get_agent_update_count_file(self): return os.path.join(conf.get_lib_dir(), self.name, AGENT_UPDATE_COUNT_FILE) def get_agent_manifest_path(self): return os.path.join(self.get_agent_dir(), AGENT_MANIFEST_FILE) def get_agent_pkg_path(self): return ".".join((os.path.join(conf.get_lib_dir(), self.name), "zip")) def clear_error(self): self.error.clear() self.error.save() @property def is_available(self): return self.is_downloaded and not self.is_blacklisted @property def is_blacklisted(self): return self.error is not None and self.error.is_blacklisted @property def is_downloaded(self): return self.is_blacklisted or \ os.path.isfile(self.get_agent_manifest_path()) def mark_failure(self, is_fatal=False, reason='', report_func=event.warn): try: if not os.path.isdir(self.get_agent_dir()): os.makedirs(self.get_agent_dir()) self.error.mark_failure(is_fatal=is_fatal, reason=reason) self.error.save() if self.error.is_blacklisted: msg = u"Agent {0} is permanently disabled".format(self.name) report_func(WALAEventOperation.AgentDisabled, msg) except Exception as e: logger.warn(u"Agent {0} failed recording error state: {1}", self.name, ustr(e)) def inc_update_attempt_count(self): try: self.update_attempt_data.inc_count() self.update_attempt_data.save() except Exception as e: logger.warn(u"Agent {0} failed recording update attempt: {1}", self.name, ustr(e)) def get_update_attempt_count(self): return 
self.update_attempt_data.count def _ensure_loaded(self): self._load_manifest() self._load_error() def _load_error(self): try: self.error = GuestAgentError(self.get_agent_error_file()) self.error.load() logger.verbose(u"Agent {0} error state: {1}", self.name, ustr(self.error)) except Exception as e: logger.warn(u"Agent {0} failed loading error state: {1}", self.name, ustr(e)) def _load_manifest(self): path = self.get_agent_manifest_path() if not os.path.isfile(path): msg = u"Agent {0} is missing the {1} file".format(self.name, AGENT_MANIFEST_FILE) raise UpdateError(msg) with open(path, "r") as manifest_file: try: manifests = json.load(manifest_file) except Exception as e: msg = u"Agent {0} has a malformed {1} ({2})".format(self.name, AGENT_MANIFEST_FILE, ustr(e)) raise UpdateError(msg) if type(manifests) is list: if len(manifests) <= 0: msg = u"Agent {0} has an empty {1}".format(self.name, AGENT_MANIFEST_FILE) raise UpdateError(msg) manifest = manifests[0] else: manifest = manifests try: self.manifest = HandlerManifest(manifest) # pylint: disable=W0201 if len(self.manifest.get_enable_command()) <= 0: raise Exception(u"Manifest is missing the enable command") except Exception as e: msg = u"Agent {0} has an illegal {1}: {2}".format( self.name, AGENT_MANIFEST_FILE, ustr(e)) raise UpdateError(msg) logger.verbose( u"Agent {0} loaded manifest from {1}", self.name, self.get_agent_manifest_path()) logger.verbose(u"Successfully loaded Agent {0} {1}: {2}", self.name, AGENT_MANIFEST_FILE, ustr(self.manifest.data)) return class GuestAgentError(object): def __init__(self, path): self.last_failure = 0.0 self.was_fatal = False if path is None: raise UpdateError(u"GuestAgentError requires a path") self.path = path self.failure_count = 0 self.reason = '' self.clear() return def mark_failure(self, is_fatal=False, reason=''): self.last_failure = time.time() self.failure_count += 1 self.was_fatal = is_fatal self.reason = reason return def clear(self): self.last_failure = 0.0 
self.failure_count = 0 self.was_fatal = False self.reason = '' return @property def is_blacklisted(self): return self.was_fatal or self.failure_count >= MAX_FAILURE def load(self): if self.path is not None and os.path.isfile(self.path): try: with open(self.path, 'r') as f: self.from_json(json.load(f)) except Exception as error: # The error.json file is only supposed to be written only by the agent. # If for whatever reason the file is malformed, just delete it to reset state of the errors. logger.warn( "Ran into error when trying to load error file {0}, deleting it to clean state. Error: {1}".format( self.path, textutil.format_exception(error))) try: os.remove(self.path) except Exception: # We try best case efforts to delete the file, ignore error if we're unable to do so pass return def save(self): if os.path.isdir(os.path.dirname(self.path)): with open(self.path, 'w') as f: json.dump(self.to_json(), f) return def from_json(self, data): self.last_failure = max(self.last_failure, data.get(u"last_failure", 0.0)) self.failure_count = max(self.failure_count, data.get(u"failure_count", 0)) self.was_fatal = self.was_fatal or data.get(u"was_fatal", False) reason = data.get(u"reason", '') self.reason = reason if reason != '' else self.reason return def to_json(self): data = { u"last_failure": self.last_failure, u"failure_count": self.failure_count, u"was_fatal": self.was_fatal, u"reason": ustr(self.reason) } return data def __str__(self): return "Last Failure: {0}, Total Failures: {1}, Fatal: {2}, Reason: {3}".format( self.last_failure, self.failure_count, self.was_fatal, self.reason) class GuestAgentUpdateAttempt(object): def __init__(self, path): self.count = 0 if path is None: raise UpdateError(u"GuestAgentUpdateAttempt requires a path") self.path = path self.clear() def inc_count(self): self.count += 1 def clear(self): self.count = 0 def load(self): if self.path is not None and os.path.isfile(self.path): try: with open(self.path, 'r') as f: 
self.from_json(json.load(f)) except Exception as error: # The update_attempt.json file is only supposed to be written only by the agent. # If for whatever reason the file is malformed, just delete it to reset state of the errors. logger.warn( "Ran into error when trying to load error file {0}, deleting it to clean state. Error: {1}".format( self.path, textutil.format_exception(error))) try: os.remove(self.path) except Exception: # We try best case efforts to delete the file, ignore error if we're unable to do so pass def save(self): if os.path.isdir(os.path.dirname(self.path)): with open(self.path, 'w') as f: json.dump(self.to_json(), f) def from_json(self, data): self.count = data.get(u"count", 0) def to_json(self): data = { u"count": self.count } return data class GuestAgentUpdateUtil(object): @staticmethod def get_initial_update_state_file(): """ This file tracks whether the initial update attempt has been made or not """ return os.path.join(conf.get_lib_dir(), INITIAL_UPDATE_STATE_FILE) @staticmethod def save_initial_update_state_file(): """ Save the file if agent attempted initial update """ try: with open(GuestAgentUpdateUtil.get_initial_update_state_file(), "w"): pass except Exception as e: msg = "Error creating the initial update state file ({0}): {1}".format(GuestAgentUpdateUtil.get_initial_update_state_file(), ustr(e)) logger.warn(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) @staticmethod def is_initial_update(): """ Returns True if the state file doesn't exist, as the presence of the file indicates that the initial update has already been attempted """ return not os.path.exists(GuestAgentUpdateUtil.get_initial_update_state_file()) @staticmethod def get_rsm_update_state_file(): """ This file tracks whether the last attempted update was an RSM update or not """ return os.path.join(conf.get_lib_dir(), RSM_UPDATE_STATE_FILE) @staticmethod def save_rsm_update_state_file(): """ Save the rsm state empty file when we switch to 
RSM """ try: with open(GuestAgentUpdateUtil.get_rsm_update_state_file(), "w"): pass except Exception as e: msg = "Error creating the RSM state file ({0}): {1}".format(GuestAgentUpdateUtil.get_rsm_update_state_file(), ustr(e)) logger.warn(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) @staticmethod def remove_rsm_update_state_file(): """ Remove the rsm state file when we switch to self-update """ try: if os.path.exists(GuestAgentUpdateUtil.get_rsm_update_state_file()): os.remove(GuestAgentUpdateUtil.get_rsm_update_state_file()) except Exception as e: msg = "Error removing the RSM state file ({0}): {1}".format(GuestAgentUpdateUtil.get_rsm_update_state_file(), ustr(e)) logger.warn(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) @staticmethod def is_last_update_with_rsm(): """ Returns True if the state file exists, as this indicates that the last update was with RSM """ return os.path.exists(GuestAgentUpdateUtil.get_rsm_update_state_file()) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/interfaces.py000066400000000000000000000027561510742556200237150ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # class ThreadHandlerInterface(object): """ Interface for all thread handlers created and maintained by the GuestAgent. 
""" @staticmethod def get_thread_name(): raise NotImplementedError("get_thread_name() not implemented") def run(self): raise NotImplementedError("run() not implemented") def keep_alive(self): """ Returns true if the thread handler should be restarted when the thread dies and false when it should remain dead. Defaults to True and can be overridden by sub-classes. """ return True def is_alive(self): raise NotImplementedError("is_alive() not implemented") def start(self): raise NotImplementedError("start() not implemented") def stop(self): raise NotImplementedError("stop() not implemented")Azure-WALinuxAgent-a976115/azurelinuxagent/ga/logcollector.py000066400000000000000000000457111510742556200242600ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import logging import os import subprocess import time import zipfile from datetime import datetime from heapq import heappush, heappop from azurelinuxagent.common.conf import get_lib_dir, get_ext_log_dir, get_agent_log_file from azurelinuxagent.common.event import initialize_event_logger_vminfo_common_parameters_and_protocol, add_event, WALAEventOperation from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.ga.logcollector_manifests import MANIFEST_NORMAL, MANIFEST_FULL # Please note: be careful when adding agent dependencies in this module. 
# This module uses its own logger and logs to its own file, not to the agent log. from azurelinuxagent.common.protocol.goal_state import GoalStateProperties from azurelinuxagent.common.protocol.util import get_protocol_util _EXTENSION_LOG_DIR = get_ext_log_dir() _AGENT_LIB_DIR = get_lib_dir() _AGENT_LOG = get_agent_log_file() _LOG_COLLECTOR_DIR = os.path.join(_AGENT_LIB_DIR, "logcollector") _TRUNCATED_FILES_DIR = os.path.join(_LOG_COLLECTOR_DIR, "truncated") OUTPUT_RESULTS_FILE_PATH = os.path.join(_LOG_COLLECTOR_DIR, "results.txt") COMPRESSED_ARCHIVE_PATH = os.path.join(_LOG_COLLECTOR_DIR, "logs.zip") CGROUPS_UNIT = "collect-logs.scope" GRACEFUL_KILL_ERRCODE = 3 INVALID_CGROUPS_ERRCODE = 2 UNEXPECTED_CGROUP_PATH_ERRCODE = 4 LOG_COLLECTOR_CGROUP_PATH_VALIDATION_MAX_RETRIES = 3 LOG_COLLECTOR_CGROUP_PATH_VALIDATION_RETRY_DELAY = 5 LOG_COLLECTOR_CGROUP_PATH_VALIDATION_MAX_FAILURES = 3 _MUST_COLLECT_FILES = [ _AGENT_LOG, os.path.join(_AGENT_LIB_DIR, "waagent_status.json"), os.path.join(_AGENT_LIB_DIR, "history", "*.zip"), os.path.join(_EXTENSION_LOG_DIR, "*", "*"), os.path.join(_EXTENSION_LOG_DIR, "*", "*", "*"), "{0}.*".format(_AGENT_LOG) # any additional waagent.log files (e.g., waagent.log.1.gz) ] _FILE_SIZE_LIMIT = 30 * 1024 * 1024 # 30 MB _UNCOMPRESSED_ARCHIVE_SIZE_LIMIT = 150 * 1024 * 1024 # 150 MB _LOGGER = logging.getLogger(__name__) class LogCollector(object): _TRUNCATED_FILE_PREFIX = "truncated_" def __init__(self, is_full_mode=False): self._is_full_mode = is_full_mode self._manifest = MANIFEST_FULL if is_full_mode else MANIFEST_NORMAL self._must_collect_files = self._expand_must_collect_files() self._create_base_dirs() self._set_logger() @staticmethod def _mkdir(dirname): if not os.path.isdir(dirname): os.makedirs(dirname) @staticmethod def _reset_file(filepath): with open(filepath, "wb") as out_file: out_file.write("".encode("utf-8")) @staticmethod def _create_base_dirs(): LogCollector._mkdir(_LOG_COLLECTOR_DIR) LogCollector._mkdir(_TRUNCATED_FILES_DIR) 
@staticmethod def _set_logger(): _f_handler = logging.FileHandler(OUTPUT_RESULTS_FILE_PATH, encoding="utf-8") _f_format = logging.Formatter(fmt='%(asctime)s %(levelname)s %(message)s', datefmt=u'%Y-%m-%dT%H:%M:%SZ') _f_format.converter = time.gmtime _f_handler.setFormatter(_f_format) _LOGGER.addHandler(_f_handler) _LOGGER.setLevel(logging.INFO) @staticmethod def initialize_telemetry(): protocol = get_protocol_util().get_protocol(init_goal_state=False, create_transport_certificate=False, save_to_history=False) protocol.client.reset_goal_state(goal_state_properties=GoalStateProperties.RoleConfig | GoalStateProperties.HostingEnv) # Initialize the common parameters for telemetry events initialize_event_logger_vminfo_common_parameters_and_protocol(protocol) @staticmethod def _run_shell_command(command, stdout=subprocess.PIPE, log_output=False): """ Runs a shell command in a subprocess, logs any errors to the log file, enables changing the stdout stream, and logs the output of the command to the log file if indicated by the `log_output` parameter. 
:param command: Shell command to run :param stdout: Where to write the output of the command :param log_output: If true, log the command output to the log file """ def format_command(cmd): return " ".join(cmd) if isinstance(cmd, list) else command def _encode_command_output(output): return ustr(output, encoding="utf-8", errors="backslashreplace") try: process = subprocess.Popen(command, stdout=stdout, stderr=subprocess.PIPE, shell=False) stdout, stderr = process.communicate() return_code = process.returncode except Exception as e: error_msg = u"Command [{0}] raised unexpected exception: [{1}]".format(format_command(command), ustr(e)) _LOGGER.error(error_msg) return if return_code != 0: encoded_stdout = _encode_command_output(stdout) encoded_stderr = _encode_command_output(stderr) error_msg = "Command: [{0}], return code: [{1}], stdout: [{2}] stderr: [{3}]".format(format_command(command), return_code, encoded_stdout, encoded_stderr) _LOGGER.error(error_msg) return if log_output: msg = "Output of command [{0}]:\n{1}".format(format_command(command), _encode_command_output(stdout)) _LOGGER.info(msg) @staticmethod def _expand_must_collect_files(): # Match the regexes from the MUST_COLLECT_FILES list to existing file paths on disk. manifest = [] for path in _MUST_COLLECT_FILES: manifest.extend(sorted(glob.glob(path))) return manifest def _read_manifest(self): return self._manifest.splitlines() @staticmethod def _process_ll_command(folder): LogCollector._run_shell_command(["ls", "-alF", folder], log_output=True) @staticmethod def _process_echo_command(message): _LOGGER.info(message) @staticmethod def _process_copy_command(path): file_paths = glob.glob(path) for file_path in file_paths: _LOGGER.info(file_path) return file_paths @staticmethod def _convert_file_name_to_archive_name(file_name): # File name is the name of the file on disk, whereas archive name is the name of that same file in the archive. 
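The `_run_shell_command` helper above runs a command without a shell, captures stdout/stderr, and decodes output defensively before logging. A stripped-down sketch of the same shape (logging replaced with return values; this is not the agent's exact helper):

```python
import subprocess

def run_command(command):
    """Run a command without a shell; return (return_code, stdout, stderr) as text."""
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
    # Decode defensively: command output is not guaranteed to be valid UTF-8.
    return (process.returncode,
            stdout.decode("utf-8", errors="backslashreplace"),
            stderr.decode("utf-8", errors="backslashreplace"))
```

The `errors="backslashreplace"` choice mirrors the source's `_encode_command_output`: a log collector must never crash because a collected command emitted undecodable bytes.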
# For non-truncated files: /var/log/waagent.log on disk becomes var/log/waagent.log in archive # (leading separator is removed by the archive). # For truncated files: /var/lib/waagent/logcollector/truncated/var/log/syslog.1 on disk becomes # truncated_var_log_syslog.1 in the archive. if file_name.startswith(_TRUNCATED_FILES_DIR): original_file_path = file_name[len(_TRUNCATED_FILES_DIR):].lstrip(os.path.sep) archive_file_name = LogCollector._TRUNCATED_FILE_PREFIX + original_file_path.replace(os.path.sep, "_") return archive_file_name else: return file_name.lstrip(os.path.sep) @staticmethod def _remove_uncollected_truncated_files(files_to_collect): # After log collection is completed, see if there are any old truncated files which were not collected # and remove them since they probably won't be collected in the future. This is possible when the # original file got deleted, so there is no need to keep its truncated version anymore. truncated_files = os.listdir(_TRUNCATED_FILES_DIR) for file_path in truncated_files: full_path = os.path.join(_TRUNCATED_FILES_DIR, file_path) if full_path not in files_to_collect: if os.path.isfile(full_path): os.remove(full_path) @staticmethod def _expand_parameters(manifest_data): _LOGGER.info("Using %s as $LIB_DIR", _AGENT_LIB_DIR) _LOGGER.info("Using %s as $LOG_DIR", _EXTENSION_LOG_DIR) _LOGGER.info("Using %s as $AGENT_LOG", _AGENT_LOG) new_manifest = [] for line in manifest_data: new_line = line.replace("$LIB_DIR", _AGENT_LIB_DIR) new_line = new_line.replace("$LOG_DIR", _EXTENSION_LOG_DIR) new_line = new_line.replace("$AGENT_LOG", _AGENT_LOG) new_manifest.append(new_line) return new_manifest def _process_manifest_file(self): files_to_collect = set() data = self._read_manifest() manifest_entries = LogCollector._expand_parameters(data) for entry in manifest_entries: # The entry can be one of the four flavours: # 1) ll,/etc/udev/rules.d -- list out contents of the folder and store to results file # 2) echo,### Gathering Configuration 
Files ### -- print message to results file # 3) copy,/var/lib/waagent/provisioned -- add file to list of files to be collected # 4) diskinfo, -- ignore commands from manifest other than ll, echo, and copy for now contents = entry.split(",") if len(contents) != 2: # If it's not a comment or an empty line, it's a malformed entry if not entry.startswith("#") and len(entry.strip()) > 0: _LOGGER.error("Couldn't parse \"%s\"", entry) continue command, value = contents if command == "ll": self._process_ll_command(value) elif command == "echo": self._process_echo_command(value) elif command == "copy": files_to_collect.update(self._process_copy_command(value)) return files_to_collect @staticmethod def _truncate_large_file(file_path): # Truncate large file to size limit (keep freshest entries of the file), copy file to a temporary location # and update file path in list of files to collect try: # Binary files cannot be truncated, don't include large binary files ext = os.path.splitext(file_path)[1] if ext in [".gz", ".zip", ".xz"]: _LOGGER.warning("Discarding large binary file %s", file_path) return None truncated_file_path = os.path.join(_TRUNCATED_FILES_DIR, file_path.replace(os.path.sep, "_")) if os.path.exists(truncated_file_path): original_file_mtime = os.path.getmtime(file_path) truncated_file_mtime = os.path.getmtime(truncated_file_path) # If the original file hasn't been updated since the truncated file, it means there were no changes # and we don't need to truncate it again. if original_file_mtime < truncated_file_mtime: return truncated_file_path # Get the last N bytes of the file with open(truncated_file_path, "w+") as fh: LogCollector._run_shell_command(["tail", "-c", str(_FILE_SIZE_LIMIT), file_path], stdout=fh) return truncated_file_path except OSError as e: _LOGGER.error("Failed to truncate large file: %s", ustr(e)) return None def _get_file_priority(self, file_entry): # The sooner the file appears in the must collect list, the bigger its priority. 
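The manifest format handled above is a simple `command,value` line protocol: comments and blank lines are tolerated, malformed entries are reported, and unknown commands are ignored by the dispatcher. A self-contained sketch of just the parsing rule (the function name is illustrative):

```python
def parse_manifest(text):
    """Yield (command, value) pairs from 'command,value' lines, skipping comments and blanks."""
    entries = []
    for line in text.splitlines():
        parts = line.split(",")
        if len(parts) != 2:
            # Not a well-formed entry; comments and empty lines pass silently.
            if not line.startswith("#") and line.strip():
                raise ValueError("Couldn't parse {0!r}".format(line))
            continue
        entries.append((parts[0], parts[1]))
    return entries
```

Note that an entry like `diskinfo,` parses to `("diskinfo", "")`, which is why the real dispatcher can simply ignore commands it does not handle yet.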
# Priority is higher the lower the number (0 is highest priority). try: return self._must_collect_files.index(file_entry) except ValueError: # Doesn't matter, file is not in the must collect list, assign a low priority return 999999999 def _get_priority_files_list(self, file_list): # Given a list of files to collect, determine if they show up in the must collect list and build a priority # queue. The queue will determine the order in which the files are collected, highest priority files first. priority_file_queue = [] for file_entry in file_list: priority = self._get_file_priority(file_entry) heappush(priority_file_queue, (priority, file_entry)) return priority_file_queue def _get_final_list_for_archive(self, priority_file_queue): # Given a priority queue of files to collect, add one by one while the archive size is under the size limit. # If a single file is over the file size limit, truncate it before adding it to the archive. _LOGGER.info("### Preparing list of files to add to archive ###") total_uncompressed_size = 0 final_files_to_collect = [] while priority_file_queue: try: file_path = heappop(priority_file_queue)[1] # (priority, file_path) file_size = min(os.path.getsize(file_path), _FILE_SIZE_LIMIT) if total_uncompressed_size + file_size > _UNCOMPRESSED_ARCHIVE_SIZE_LIMIT: _LOGGER.warning("Archive too big, done with adding files.") break if os.path.getsize(file_path) <= _FILE_SIZE_LIMIT: final_files_to_collect.append(file_path) total_uncompressed_size += file_size _LOGGER.info("Adding file %s, size %s b", file_path, file_size) else: truncated_file_path = self._truncate_large_file(file_path) if truncated_file_path: _LOGGER.info("Adding truncated file %s, size %s b", truncated_file_path, file_size) final_files_to_collect.append(truncated_file_path) total_uncompressed_size += file_size except IOError as e: if e.errno == 2: # [Errno 2] No such file or directory _LOGGER.warning("File %s does not exist, skipping collection for this file", file_path) msg = 
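The prioritization above orders candidate files by their index in the must-collect list via a min-heap, with a large sentinel for everything else. A minimal sketch of that ordering (the must-collect entries here are examples, not the agent's full list):

```python
from heapq import heappush, heappop

MUST_COLLECT = ["/var/log/waagent.log", "/var/lib/waagent/waagent_status.json"]

def priority_of(path):
    """Lower number = higher priority; files outside the must-collect list sort last."""
    try:
        return MUST_COLLECT.index(path)
    except ValueError:
        return 999999999  # same sentinel as the source

def order_files(paths):
    heap = []
    for p in paths:
        heappush(heap, (priority_of(p), p))
    return [heappop(heap)[1] for _ in range(len(heap))]
```

Because the archive has a hard size budget, popping highest-priority files first guarantees the must-collect files are admitted before optional logs can exhaust the limit.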
"Uncompressed archive size is {0} b".format(total_uncompressed_size) _LOGGER.info(msg) add_event(op=WALAEventOperation.LogCollection, message=msg) return final_files_to_collect, total_uncompressed_size def _create_list_of_files_to_collect(self): # The final list of files to be collected by zip is created in three steps: # 1) Parse given manifest file, expanding wildcards and keeping a list of files that exist on disk # 2) Assign those files a priority depending on whether they are in the must collect file list. # 3) In priority order, add files to the final list to be collected, until the size of the archive is under # the size limit. parsed_file_paths = self._process_manifest_file() prioritized_file_paths = self._get_priority_files_list(parsed_file_paths) files_to_collect, total_uncompressed_size = self._get_final_list_for_archive(prioritized_file_paths) return files_to_collect, total_uncompressed_size def collect_logs_and_get_archive(self): """ Public method that collects necessary log files in a compressed zip archive. :return: Returns the path of the collected compressed archive """ files_to_collect = [] total_uncompressed_size = 0 try: # Clear previous run's output and create base directories if they don't exist already. self._create_base_dirs() LogCollector._reset_file(OUTPUT_RESULTS_FILE_PATH) start_time = datetime.now(UTC) _LOGGER.info("Starting log collection at %s", start_time.strftime("%Y-%m-%dT%H:%M:%SZ")) _LOGGER.info("Using log collection mode %s", "full" if self._is_full_mode else "normal") files_to_collect, total_uncompressed_size = self._create_list_of_files_to_collect() _LOGGER.info("### Creating compressed archive ###") compressed_archive = None def handle_add_file_to_archive_error(error_count, max_errors, file_to_collect, exception): error_count += 1 if error_count >= max_errors: raise Exception("Too many errors, giving up. 
Last error: {0}".format(ustr(exception))) else: _LOGGER.warning("Failed to add file %s to the archive: %s", file_to_collect, ustr(exception)) return error_count try: compressed_archive = zipfile.ZipFile(COMPRESSED_ARCHIVE_PATH, "w", compression=zipfile.ZIP_DEFLATED) max_errors = 8 error_count = 0 for file_to_collect in files_to_collect: try: archive_file_name = LogCollector._convert_file_name_to_archive_name(file_to_collect) compressed_archive.write(file_to_collect.encode("utf-8"), arcname=archive_file_name) except IOError as e: if e.errno == 2: # [Errno 2] No such file or directory _LOGGER.warning("File %s does not exist, skipping collection for this file", file_to_collect) else: error_count = handle_add_file_to_archive_error(error_count, max_errors, file_to_collect, e) except Exception as e: error_count = handle_add_file_to_archive_error(error_count, max_errors, file_to_collect, e) compressed_archive_size = os.path.getsize(COMPRESSED_ARCHIVE_PATH) _LOGGER.info("Successfully compressed files. 
Compressed archive size is %s b", compressed_archive_size) end_time = datetime.now(UTC) duration = end_time - start_time elapsed_ms = int(((duration.days * 24 * 60 * 60 + duration.seconds) * 1000) + (duration.microseconds / 1000.0)) _LOGGER.info("Finishing log collection at %s", end_time.strftime("%Y-%m-%dT%H:%M:%SZ")) _LOGGER.info("Elapsed time: %s ms", elapsed_ms) compressed_archive.write(OUTPUT_RESULTS_FILE_PATH.encode("utf-8"), arcname="results.txt") finally: if compressed_archive is not None: compressed_archive.close() return COMPRESSED_ARCHIVE_PATH, total_uncompressed_size except Exception as e: msg = "Failed to collect logs: {0}".format(ustr(e)) _LOGGER.error(msg) raise finally: self._remove_uncollected_truncated_files(files_to_collect) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/logcollector_manifests.py000066400000000000000000000063521510742556200263270ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
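The elapsed-time computation above avoids `timedelta.total_seconds()` because that method does not exist on Python 2.6, so milliseconds are built from days, seconds, and microseconds by hand. The same arithmetic in isolation:

```python
from datetime import timedelta

def elapsed_ms(duration):
    """Milliseconds in a timedelta, computed without total_seconds() (Python 2.6 compatible)."""
    return int(((duration.days * 24 * 60 * 60 + duration.seconds) * 1000)
               + (duration.microseconds / 1000.0))
```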
# # Requires Python 2.6+ and Openssl 1.0+ # MANIFEST_NORMAL = """echo,### Probing Directories ### ll,/var/log ll,$LIB_DIR echo,### Gathering Configuration Files ### copy,/etc/*-release copy,/etc/HOSTNAME copy,/etc/hostname copy,/etc/waagent.conf echo, echo,### Gathering Log Files ### copy,$AGENT_LOG* copy,/var/log/dmesg* copy,/var/log/syslog* copy,/var/log/auth* copy,$LOG_DIR/*/* copy,$LOG_DIR/*/*/* copy,$LOG_DIR/custom-script/handler.log echo, echo,### Gathering Extension Files ### copy,$LIB_DIR/ovf-env.xml copy,$LIB_DIR/waagent_status.json copy,$LIB_DIR/*/status/*.status copy,$LIB_DIR/*/config/*.settings copy,$LIB_DIR/*/config/HandlerState copy,$LIB_DIR/*/config/HandlerStatus copy,$LIB_DIR/error.json copy,$LIB_DIR/history/*.zip echo, """ MANIFEST_FULL = """echo,### Probing Directories ### ll,/var/log ll,$LIB_DIR ll,/etc/udev/rules.d echo,### Gathering Configuration Files ### copy,$LIB_DIR/provisioned copy,/etc/fstab copy,/etc/ssh/sshd_config copy,/boot/grub*/grub.c* copy,/boot/grub*/menu.lst copy,/etc/*-release copy,/etc/HOSTNAME copy,/etc/hostname copy,/etc/network/interfaces copy,/etc/network/interfaces.d/*.cfg copy,/etc/netplan/50-cloud-init.yaml copy,/etc/nsswitch.conf copy,/etc/resolv.conf copy,/run/systemd/resolve/stub-resolv.conf copy,/run/resolvconf/resolv.conf copy,/etc/sysconfig/iptables copy,/etc/sysconfig/network copy,/etc/sysconfig/network/ifcfg-eth* copy,/etc/sysconfig/network/routes copy,/etc/sysconfig/network-scripts/ifcfg-eth* copy,/etc/sysconfig/network-scripts/route-eth* copy,/etc/sysconfig/SuSEfirewall2 copy,/etc/ufw/ufw.conf copy,/etc/waagent.conf copy,/var/lib/dhcp/dhclient.eth0.leases copy,/var/lib/dhclient/dhclient-eth0.leases copy,/var/lib/wicked/lease-eth0-dhcp-ipv4.xml copy,/run/systemd/netif/leases/2 echo, echo,### Gathering Log Files ### copy,$AGENT_LOG* copy,/var/log/syslog* copy,/var/log/rsyslog* copy,/var/log/messages* copy,/var/log/kern* copy,/var/log/dmesg* copy,/var/log/dpkg* copy,/var/log/yum* copy,/var/log/cloud-init* 
copy,/var/log/boot* copy,/var/log/auth* copy,/var/log/secure* copy,$LOG_DIR/*/* copy,$LOG_DIR/*/*/* copy,$LOG_DIR/custom-script/handler.log copy,$LOG_DIR/run-command/handler.log echo, echo,### Gathering Extension Files ### copy,$LIB_DIR/ovf-env.xml copy,$LIB_DIR/*/status/*.status copy,$LIB_DIR/*/config/*.settings copy,$LIB_DIR/*/config/HandlerState copy,$LIB_DIR/*/config/HandlerStatus copy,$LIB_DIR/SharedConfig.xml copy,$LIB_DIR/ManagedIdentity-*.json copy,$LIB_DIR/*/error.json copy,$LIB_DIR/waagent_status.json copy,$LIB_DIR/history/*.zip echo, echo,### Gathering Disk Info ### diskinfo, echo,### Gathering Guest ProxyAgent Log Files ### copy,/var/log/azure-proxy-agent/* echo, """ Azure-WALinuxAgent-a976115/azurelinuxagent/ga/memorycontroller.py000066400000000000000000000213741510742556200252030ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ import errno import os import re from azurelinuxagent.common import logger from azurelinuxagent.common.exception import CGroupsException from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.cgroupcontroller import _CgroupController, CounterNotFound, MetricValue, MetricsCategory, \ MetricsCounter, _REPORT_EVERY_HOUR class _MemoryController(_CgroupController): def __init__(self, name, cgroup_path): super(_MemoryController, self).__init__(name, cgroup_path) self._counter_not_found_error_count = 0 def _get_memory_stat_counter(self, counter_name): """ Gets the value for the provided counter in memory.stat """ try: with open(os.path.join(self.path, 'memory.stat')) as memory_stat: # # Sample file v1: # # cat memory.stat # cache 0 # rss 0 # rss_huge 0 # shmem 0 # mapped_file 0 # dirty 0 # writeback 0 # swap 0 # ... # # Sample file v2 # # cat memory.stat # anon 0 # file 147140608 # kernel 1421312 # kernel_stack 0 # pagetables 0 # sec_pagetables 0 # percpu 130752 # sock 0 # ... 
# for line in memory_stat: re_memory_counter = r'{0}\s+(\d+)'.format(counter_name) match = re.match(re_memory_counter, line) if match is not None: return int(match.groups()[0]) except (IOError, OSError) as e: if e.errno == errno.ENOENT: raise raise CGroupsException("Failed to read memory.stat: {0}".format(ustr(e))) except Exception as e: raise CGroupsException("Failed to read memory.stat: {0}".format(ustr(e))) raise CounterNotFound("Cannot find counter: {0}".format(counter_name)) def get_memory_usage(self): """ Collects anon and cache usage for the cgroup and returns as a tuple Returns anon and cache memory usage for the cgroup as a tuple -> (anon, cache) :return: Anon and cache memory usage in bytes :rtype: tuple[int, int] """ raise NotImplementedError() def try_swap_memory_usage(self): """ Collects swap usage for the cgroup :return: Memory usage in bytes :rtype: int """ raise NotImplementedError() def get_max_memory_usage(self): """ Collect max memory usage for the cgroup. :return: Memory usage in bytes :rtype: int """ raise NotImplementedError() def get_tracked_metrics(self, **_): # The log collector monitor tracks anon and cache memory separately. 
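`_get_memory_stat_counter` above scans `memory.stat` line by line for a `name value` pair with an anchored regex. A standalone sketch of that lookup, exercised against text shaped like the sample files in the comments:

```python
import re

def read_counter(stat_text, counter_name):
    """Return the integer value of `counter_name` from memory.stat-style text."""
    for line in stat_text.splitlines():
        # re.match anchors at the start of the line, so 'rss' will not match 'rss_huge'
        # (the \s+ after the name rejects the underscore).
        match = re.match(r'{0}\s+(\d+)'.format(counter_name), line)
        if match is not None:
            return int(match.groups()[0])
    raise KeyError("Cannot find counter: {0}".format(counter_name))
```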
        anon_mem_usage, cache_mem_usage = self.get_memory_usage()
        total_mem_usage = anon_mem_usage + cache_mem_usage
        return [
            MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.TOTAL_MEM_USAGE, self.name, total_mem_usage),
            MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.ANON_MEM_USAGE, self.name, anon_mem_usage),
            MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.CACHE_MEM_USAGE, self.name, cache_mem_usage),
            MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.MAX_MEM_USAGE, self.name, self.get_max_memory_usage(), _REPORT_EVERY_HOUR),
            MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.SWAP_MEM_USAGE, self.name, self.try_swap_memory_usage(), _REPORT_EVERY_HOUR)
        ]

    def get_unit_properties(self):
        return ["MemoryAccounting"]

    def get_controller_type(self):
        return "memory"


class MemoryControllerV1(_MemoryController):
    def get_memory_usage(self):
        # In v1, anon memory is reported in the 'rss' counter
        return self._get_memory_stat_counter("rss"), self._get_memory_stat_counter("cache")

    def try_swap_memory_usage(self):
        # In v1, swap memory should be collected from memory.stat, because memory.memsw.usage_in_bytes reports total Memory+SWAP.
        try:
            return self._get_memory_stat_counter("swap")
        except CounterNotFound as e:
            if self._counter_not_found_error_count < 1:
                logger.periodic_info(logger.EVERY_HALF_HOUR, '{0} from "memory.stat" file in the cgroup: {1}---[Note: This log for informational purpose only and can be ignored]'.format(ustr(e), self.path))
            self._counter_not_found_error_count += 1
            return 0

    def get_max_memory_usage(self):
        # In v1, max memory usage is reported in memory.max_usage_in_bytes
        usage = 0
        try:
            usage = int(self._get_parameters('memory.max_usage_in_bytes', first_line_only=True))
        except Exception as e:
            if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT:  # pylint: disable=E1101
                raise
            raise CGroupsException("Exception while attempting to read {0}".format("memory.max_usage_in_bytes"), e)
        return usage


class MemoryControllerV2(_MemoryController):
    def get_memory_usage(self):
        # In v2, cache memory is reported in the 'file' counter
        return self._get_memory_stat_counter("anon"), self._get_memory_stat_counter("file")

    def get_memory_throttled_events(self):
        """
        Returns the number of times processes of the cgroup are throttled and routed to perform memory reclaim
        because the high memory boundary was exceeded.
:return: Number of memory throttling events for the cgroup :rtype: int """ try: with open(os.path.join(self.path, 'memory.events')) as memory_events: # # Sample file: # # cat memory.events # low 0 # high 0 # max 0 # oom 0 # oom_kill 0 # oom_group_kill 0 # for line in memory_events: match = re.match(r'high\s+(\d+)', line) if match is not None: return int(match.groups()[0]) except (IOError, OSError) as e: if e.errno == errno.ENOENT: raise raise CGroupsException("Failed to read memory.events: {0}".format(ustr(e))) except Exception as e: raise CGroupsException("Failed to read memory.events: {0}".format(ustr(e))) raise CounterNotFound("Cannot find memory.events counter: high") def try_swap_memory_usage(self): # In v2, swap memory is reported in memory.swap.current usage = 0 try: usage = int(self._get_parameters('memory.swap.current', first_line_only=True)) except Exception as e: if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT: # pylint: disable=E1101 raise raise CGroupsException("Exception while attempting to read {0}".format("memory.swap.current"), e) return usage def get_max_memory_usage(self): # In v2, max memory usage is reported in memory.peak usage = 0 try: usage = int(self._get_parameters('memory.peak', first_line_only=True)) except Exception as e: if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT: # pylint: disable=E1101 raise raise CGroupsException("Exception while attempting to read {0}".format("memory.peak"), e) return usage def get_tracked_metrics(self, **_): metrics = super(MemoryControllerV2, self).get_tracked_metrics() throttled_value = MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.MEM_THROTTLED, self.name, self.get_memory_throttled_events()) metrics.append(throttled_value) return metrics Azure-WALinuxAgent-a976115/azurelinuxagent/ga/monitor.py000066400000000000000000000313501510742556200232510ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 
(the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime import os import threading import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.networkutil as networkutil from azurelinuxagent.ga.cgroupcontroller import MetricValue, MetricsCategory, MetricsCounter from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.errorstate import ErrorState from azurelinuxagent.common.event import add_event, WALAEventOperation, report_metric from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.healthservice import HealthService from azurelinuxagent.common.protocol.imds import get_imds_client from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.utils.restutil import IOErrorCounter from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION from azurelinuxagent.ga.periodic_operation import PeriodicOperation def get_monitor_handler(): return MonitorHandler() class PollResourceUsage(PeriodicOperation): """ Periodic operation to poll the tracked cgroups for resource usage data. It also checks whether there are processes in the agent's cgroup that should not be there. 
""" def __init__(self): super(PollResourceUsage, self).__init__(conf.get_cgroup_check_period()) self.__log_metrics = conf.get_cgroup_log_metrics() self.__periodic_metrics = {} def _operation(self): tracked_metrics = CGroupsTelemetry.poll_all_tracked() for metric in tracked_metrics: key = metric.category + metric.counter + metric.instance if key not in self.__periodic_metrics or (self.__periodic_metrics[key] + metric.report_period) <= datetime.datetime.now(UTC): report_metric(metric.category, metric.counter, metric.instance, metric.value, log_event=self.__log_metrics) self.__periodic_metrics[key] = datetime.datetime.now(UTC) CGroupConfigurator.get_instance().check_cgroups(tracked_metrics) class PollSystemWideResourceUsage(PeriodicOperation): def __init__(self): super(PollSystemWideResourceUsage, self).__init__(datetime.timedelta(hours=1)) self.__log_metrics = conf.get_cgroup_log_metrics() self.osutil = get_osutil() def poll_system_memory_metrics(self): used_mem, available_mem = self.osutil.get_used_and_available_system_memory() return [ MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.USED_MEM, "", used_mem), MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.AVAILABLE_MEM, "", available_mem) ] def _operation(self): metrics = self.poll_system_memory_metrics() for metric in metrics: report_metric(metric.category, metric.counter, metric.instance, metric.value, log_event=self.__log_metrics) class ResetPeriodicLogMessages(PeriodicOperation): """ Periodic operation to clean up the hash-tables maintained by the loggers. 
For reference, please check azurelinuxagent.common.logger.Logger and azurelinuxagent.common.event.EventLogger classes """ def __init__(self): super(ResetPeriodicLogMessages, self).__init__(datetime.timedelta(hours=12)) def _operation(self): logger.reset_periodic() class ReportNetworkErrors(PeriodicOperation): def __init__(self): super(ReportNetworkErrors, self).__init__(datetime.timedelta(minutes=30)) def _operation(self): io_errors = IOErrorCounter.get_and_reset() hostplugin_errors = io_errors.get("hostplugin") protocol_errors = io_errors.get("protocol") other_errors = io_errors.get("other") if hostplugin_errors > 0 or protocol_errors > 0 or other_errors > 0: msg = "hostplugin:{0};protocol:{1};other:{2}".format(hostplugin_errors, protocol_errors, other_errors) add_event(op=WALAEventOperation.HttpErrors, message=msg) class ReportNetworkConfigurationChanges(PeriodicOperation): """ Periodic operation to check and log changes in network configuration. """ def __init__(self): super(ReportNetworkConfigurationChanges, self).__init__(datetime.timedelta(minutes=1)) self.osutil = get_osutil() self.last_route_table_hash = b'' self.last_nic_state = {} def log_network_configuration(self): try: route_file = '/proc/net/route' if os.path.exists(route_file): lines = [] with open(route_file) as file_object: for line in file_object: lines.append(line) if len(lines) >= 100: lines.append("= self._last_warning_time + self._LOG_WARNING_PERIOD: logger.warn(warning) self._last_warning_time = datetime.datetime.now(UTC) self._last_warning = warning def next_run_time(self): return self._next_run_time def _operation(self): """ Derived classes must override this with the definition of the operation they need to perform """ raise NotImplementedError() @staticmethod def sleep_until_next_operation(operations): """ Takes a list of operations, finds the operation that should be executed next (that with the closest next_run_time) and sleeps until it is time to execute that operation. 
""" next_operation_time = min(op.next_run_time() for op in operations) sleep_timedelta = next_operation_time - datetime.datetime.now(UTC) # timedelta.total_seconds() is not available on Python 2.6, do the computation manually sleep_seconds = ((sleep_timedelta.days * 24 * 3600 + sleep_timedelta.seconds) * 10.0 ** 6 + sleep_timedelta.microseconds) / 10.0 ** 6 if sleep_seconds > 0: time.sleep(sleep_seconds) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/persist_firewall_rules.py000066400000000000000000000432621510742556200263570ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import sys import azurelinuxagent.common.conf as conf from azurelinuxagent.common import logger from azurelinuxagent.common import event from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.ga.firewall_manager import FirewallCmd, FirewallStateError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil, systemd from azurelinuxagent.common.utils import shellutil, fileutil, textutil from azurelinuxagent.common.utils.shellutil import CommandError from azurelinuxagent.common.version import get_distro class PersistFirewallRulesHandler(object): __SERVICE_FILE_CONTENT = """ # This unit file (Version={version}) was created by the Azure VM Agent. # Do not edit. 
[Unit] Description=Setup network rules for WALinuxAgent After=local-fs.target Before=network-pre.target Wants=network-pre.target DefaultDependencies=no ConditionPathExists={binary_path} [Service] Type=oneshot ExecStart={py_path} {binary_path} RemainAfterExit=yes [Install] WantedBy=network.target """ __BINARY_CONTENTS = """ # This python file was created by the Azure VM Agent. Please do not edit. import os if __name__ == '__main__': if os.path.exists("{egg_path}"): os.system("{py_path} {egg_path} --setup-firewall={wire_ip}") else: print("{egg_path} file not found, skipping execution of firewall execution setup for this boot") """ _AGENT_NETWORK_SETUP_NAME_FORMAT = "{0}-network-setup.service" BINARY_FILE_NAME = "waagent-network-setup.py" # The current version of the unit file; Update it whenever the unit file is modified to ensure Agent can dynamically # modify the unit file on VM too _UNIT_VERSION = "1.4" _DISTRO = get_distro()[0] @staticmethod def get_service_file_path(): osutil = get_osutil() service_name = PersistFirewallRulesHandler._AGENT_NETWORK_SETUP_NAME_FORMAT.format(osutil.get_service_name()) return os.path.join(osutil.get_network_setup_service_install_path(), service_name) def __init__(self, dst_ip): """ This class deals with ensuring that Firewall rules are persisted over system reboots. It tries to employ using Firewalld.service if present first as it already has provisions for persistent rules. If not, it then creates a new agent-network-setup.service file and copy it over to the osutil.get_network_setup_service_install_path() dynamically On top of it, on every service restart it ensures that the WireIP is overwritten and the new IP is blocked as well. 
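# __get_unit_file_version (further below in this file) recovers the version
# stamp from the first line of the generated unit file with a regex, so the
# agent can detect when _UNIT_VERSION changed and rewrite the unit. The same
# extraction, sketched in isolation with re.search instead of
# fileutil.findre_in_file:

```python
import re

# First comment line of the generated unit file (see __SERVICE_FILE_CONTENT).
header = "# This unit file (Version=1.4) was created by the Azure VM Agent."

# Same pattern that __get_unit_file_version() passes to fileutil.findre_in_file().
match = re.search(r"This unit file \(Version=([\d.]+)\) was created by the Azure VM Agent.", header)
print(match.group(1))  # 1.4
```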
""" osutil = get_osutil() self._network_setup_service_name = self._AGENT_NETWORK_SETUP_NAME_FORMAT.format(osutil.get_service_name()) self._is_systemd = systemd.is_systemd() self._systemd_file_path = osutil.get_network_setup_service_install_path() self._dst_ip = dst_ip # The custom service will try to call the current agent executable to setup the firewall rules self._current_agent_executable_path = os.path.join(os.getcwd(), sys.argv[0]) @staticmethod def _is_firewall_service_running(): # Check if firewall-cmd can connect to the daemon # https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/sec-Check_if_firewalld_is_running.html # Eg: firewall-cmd --state # > running try: status = shellutil.run_command(["firewall-cmd", "--state"]).rstrip() if status == "running": logger.info("The firewalld service is running") return True else: logger.info("The firewalld service is present, but not running: {0}".format(status)) return False except Exception as error: # Instance of 'Exception' has no 'errno' member (no-member) # OK to disable, errno is a member of IOError if isinstance(error, IOError) and error.errno == 2: # pylint: disable=no-member logger.info("The firewalld service is not present on the system") return False logger.info("The firewalld service is present, but not running: {0}".format(ustr(error))) return False def setup(self): if not self._is_firewall_service_running(): logger.info("Firewalld service not running/unavailable, trying to set up {0}".format(self.get_service_file_path())) if systemd.is_systemd(): self._setup_network_setup_service() else: event.info(WALAEventOperation.PersistFirewallRules, "Did not detect systemd, will not set {0}", self._network_setup_service_name) return event.info(WALAEventOperation.PersistFirewallRules,"Firewalld.service is running, setting up permanent rules on the VM") # In case of a failure, this would throw. 
In such a case, we don't need to try to setup our custom service # because on system reboot, all iptables rules are reset by firewalld.service, so it would be a no-op. # setup permanent firewalld rules firewall_manager = FirewallCmd(self._dst_ip) event.info(WALAEventOperation.Firewall, "Using firewall-cmd [version {0}] to manage the persistent firewall rules", firewall_manager.version) try: firewall_manager.remove_legacy_rule() except Exception as error: event.error(WALAEventOperation.Firewall, "Unable to remove legacy firewall rule. Error: {0}", ustr(error)) try: if firewall_manager.check(): event.info(WALAEventOperation.PersistFirewallRules, "The permanent firewall rules for Azure Fabric are already setup:\n{0}", firewall_manager.get_state()) else: firewall_manager.setup() event.info(WALAEventOperation.PersistFirewallRules, "Created permanent firewall rules for Azure Fabric:\n{0}", firewall_manager.get_state()) except FirewallStateError as e: event.warn( WALAEventOperation.PersistFirewallRules, "The permanent firewall rules for Azure Fabric are not setup correctly ({0}), will reset them. Current state:\n{1}", ustr(e), firewall_manager.get_state()) firewall_manager.remove() firewall_manager.setup() event.info(WALAEventOperation.PersistFirewallRules, "Reset permanent firewall rules for Azure Fabric:\n{0}", firewall_manager.get_state()) # Remove custom service if exists to avoid problems with firewalld try: fileutil.rm_files(*[self.get_service_file_path(), os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME)]) except Exception as error: logger.info("Unable to delete existing service {0}: {1}".format(self._network_setup_service_name, ustr(error))) def __verify_network_setup_service_enabled(self): # Check if the custom service has already been enabled cmd = ["systemctl", "is-enabled", self._network_setup_service_name] try: return shellutil.run_command(cmd).rstrip() == "enabled" except CommandError as error: msg = "{0} not enabled. 
Command: {1}, ExitCode: {2}\nStdout: {3}\nStderr: {4}".format( self._network_setup_service_name, ' '.join(cmd), error.returncode, error.stdout, error.stderr) except Exception as error: msg = "Ran into error, {0} not enabled. Error: {1}".format(self._network_setup_service_name, ustr(error)) logger.verbose(msg) return False def _setup_network_setup_service(self): # Even if service is enabled, we need to overwrite the binary file with the current IP in case it changed. # This is to handle the case where WireIP can change midway on service restarts. # Additionally, incase of auto-update this would also update the location of the new EGG file ensuring that # the service is always run from the most latest agent. # If RHEL and in image mode, we need to clean up the service file in old path if self._DISTRO == 'rhel' and os.path.exists('/run/ostree-booted'): old_service_file_path = os.path.join('/usr/lib/systemd/system/', self._network_setup_service_name) if os.path.exists(old_service_file_path): logger.info("Cleaning up old service file in image mode: {0}".format(old_service_file_path)) try: fileutil.rm_files(old_service_file_path) except Exception as error: logger.warn("Unable to delete old service in image mode {0}: {1}".format(self._network_setup_service_name, ustr(error))) self.__setup_binary_file() network_service_enabled = self.__verify_network_setup_service_enabled() if network_service_enabled and not self.__should_update_unit_file(): event.info(WALAEventOperation.PersistFirewallRules, "Service: {0} already enabled. No change needed.", self._network_setup_service_name) self.__log_network_setup_service_logs() else: if not network_service_enabled: logger.info("Service: {0} not enabled. 
Adding it now".format(self._network_setup_service_name)) else: logger.info( "Unit file {0} version modified to {1}, setting it up again".format(self.get_service_file_path(), self._UNIT_VERSION)) # Create unit file with default values self.__set_service_unit_file() # After modifying the service, systemctl may issue a warning when checking the service, and daemon-reload should not be used to clear the warning, since it can affect other services event.info(WALAEventOperation.PersistFirewallRules, "Successfully added and enabled the {0}", self._network_setup_service_name) def __setup_binary_file(self): binary_file_path = os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME) try: fileutil.write_file(binary_file_path, self.__BINARY_CONTENTS.format(egg_path=self._current_agent_executable_path, wire_ip=self._dst_ip, py_path=sys.executable)) logger.info("Successfully updated the Binary file {0} for firewall setup".format(binary_file_path)) except Exception: logger.warn( "Unable to setup binary file, removing the service unit file {0} to ensure its not run on system reboot".format( self.get_service_file_path())) self.__remove_file_without_raising(binary_file_path) self.__remove_file_without_raising(self.get_service_file_path()) raise def __set_service_unit_file(self): service_unit_file = self.get_service_file_path() binary_path = os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME) try: fileutil.write_file(service_unit_file, self.__SERVICE_FILE_CONTENT.format(binary_path=binary_path, py_path=sys.executable, version=self._UNIT_VERSION)) fileutil.chmod(service_unit_file, 0o644) # Finally enable the service. This is needed to ensure the service is started on system boot cmd = ["systemctl", "enable", self._network_setup_service_name] try: shellutil.run_command(cmd) except CommandError as error: msg = ustr( "Unable to enable service: {0}; deleting service file: {1}. 
Command: {2}, Exit-code: {3}.\nstdout: {4}\nstderr: {5}").format( self._network_setup_service_name, service_unit_file, ' '.join(cmd), error.returncode, error.stdout, error.stderr) raise Exception(msg) except Exception: self.__remove_file_without_raising(service_unit_file) raise @staticmethod def __remove_file_without_raising(file_path): if os.path.exists(file_path): try: os.remove(file_path) except Exception as error: logger.warn("Unable to delete file: {0}; Error: {1}".format(file_path, ustr(error))) def __verify_network_setup_service_failed(self): # Check if the agent-network-setup.service failed in its last run # Note: # The `systemctl is-failed ` command would return "failed" and ExitCode: 0 if the service was actually # in a failed state. # For the rest of the cases (eg: active, in-active, dead, etc) it would return the state and a non-0 ExitCode. cmd = ["systemctl", "is-failed", self._network_setup_service_name] try: return shellutil.run_command(cmd).rstrip() == "failed" except CommandError as error: msg = "{0} not in a failed state. Command: {1}, ExitCode: {2}\nStdout: {3}\nStderr: {4}".format( self._network_setup_service_name, ' '.join(cmd), error.returncode, error.stdout, error.stderr) except Exception as error: msg = "Ran into error, {0} not failed. 
Error: {1}".format(self._network_setup_service_name, ustr(error)) logger.verbose(msg) return False def __log_network_setup_service_logs(self): # Get logs from journalctl - https://www.freedesktop.org/software/systemd/man/journalctl.html cmd = ["journalctl", "-u", self._network_setup_service_name, "-b", "--utc"] service_failed = self.__verify_network_setup_service_failed() try: stdout = shellutil.run_command(cmd) msg = ustr("Logs from the {0} since system boot:\n{1}").format(self._network_setup_service_name, stdout) logger.info(msg) except CommandError as error: msg = "Unable to fetch service logs, Command: {0} failed with ExitCode: {1}\nStdout: {2}\nStderr: {3}".format( ' '.join(cmd), error.returncode, error.stdout, error.stderr) logger.warn(msg) except Exception as e: msg = "Ran into unexpected error when getting logs for {0} service. Error: {1}".format( self._network_setup_service_name, textutil.format_exception(e)) logger.warn(msg) # Log service status and logs if we can fetch them from journalctl and send it to Kusto, # else just log the error of the failure of fetching logs add_event( op=WALAEventOperation.PersistFirewallRules, is_success=(not service_failed), message=msg, log_event=False) def __get_unit_file_version(self): if not os.path.exists(self.get_service_file_path()): raise OSError("{0} not found".format(self.get_service_file_path())) match = fileutil.findre_in_file(self.get_service_file_path(), line_re="This unit file \\(Version=([\\d.]+)\\) was created by the Azure VM Agent.") if match is None: raise ValueError("Version tag not found in the unit file") return match.group(1).strip() def __get_unit_exec_start(self): if not os.path.exists(self.get_service_file_path()): raise OSError("{0} not found".format(self.get_service_file_path())) match = fileutil.findre_in_file(self.get_service_file_path(), line_re="ExecStart=(.*)") if match is None: raise ValueError("ExecStart tag not found in the unit file") return match.group(1).strip() def 
__should_update_unit_file(self): """ Check if the unit file version changed from the expected version or if the exec-start changed from the expected exec-start :return: True if unit file need update else False """ try: unit_file_version = self.__get_unit_file_version() unit_exec_start = self.__get_unit_exec_start() except Exception as error: logger.info("Unable to read content of unit file: {0}, overwriting unit file".format(ustr(error))) # Since we can't determine the version or exec start, marking the file as modified to overwrite the unit file return True if unit_file_version != self._UNIT_VERSION: logger.info( "Unit file version: {0} does not match with expected version: {1}, overwriting unit file".format( unit_file_version, self._UNIT_VERSION)) return True binary_path = os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME) expected_exec_start = "{0} {1}".format(sys.executable, binary_path) if unit_exec_start != expected_exec_start: logger.info( "Unit file exec-start: {0} does not match with expected exec-start: {1}, overwriting unit file".format( unit_exec_start, expected_exec_start)) return True logger.info( "Unit file matches with expected version: {0} and exec start: {1}, not overwriting unit file".format(unit_file_version, unit_exec_start)) return False Azure-WALinuxAgent-a976115/azurelinuxagent/ga/policy/000077500000000000000000000000001510742556200225055ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/ga/policy/__init__.py000066400000000000000000000011621510742556200246160ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+Azure-WALinuxAgent-a976115/azurelinuxagent/ga/policy/policy_engine.py000066400000000000000000000364531510742556200257160ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.4+ and Openssl 1.0+ # import json import re import os from azurelinuxagent.common.future import ustr from azurelinuxagent.common import logger from azurelinuxagent.common.event import WALAEventOperation, add_event from azurelinuxagent.common import conf from azurelinuxagent.common.exception import AgentError from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import _CaseFoldedDict from azurelinuxagent.common.utils.flexible_version import FlexibleVersion # Default policy values to be used when customer does not specify these attributes in the policy file. _DEFAULT_ALLOW_LISTED_EXTENSIONS_ONLY = False _DEFAULT_SIGNATURE_REQUIRED = False _DEFAULT_EXTENSIONS = {} # Agent supports up to this version of the policy file ("policyVersion" in schema). 
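# The policy schema documented in _parse_policy() below can be illustrated with
# a minimal policy file. The document and the extension name here are
# hypothetical examples; the version check mirrors the regex used by
# _parse_policy_version.

```python
import json
import re

# Hypothetical policy document conforming to the schema described in _parse_policy().
example_policy = json.loads("""
{
    "policyVersion": "0.1.0",
    "extensionPolicies": {
        "allowListedExtensionsOnly": true,
        "signatureRequired": false,
        "extensions": {
            "Example.Namespace.ExampleExtension": { "signatureRequired": true }
        }
    }
}
""")

# "policyVersion" must be in 'major[.minor[.patch]]' form.
assert re.match(r"^\d+(\.\d+(\.\d+)?)?$", example_policy["policyVersion"]) is not None
```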
# Increment this number when any new attributes are added to the policy schema. _MAX_SUPPORTED_POLICY_VERSION = "0.1.0" class InvalidPolicyError(AgentError): """ Error raised if user-provided policy is invalid. """ def __init__(self, msg, inner=None): msg = "Customer-provided policy file ('{0}') is invalid, please correct the following error: {1}".format(conf.get_policy_file_path(), msg) super(InvalidPolicyError, self).__init__(msg, inner) class _PolicyEngine(object): """ Implements base policy engine API. """ def __init__(self): """ Initialize policy engine: if policy enforcement is enabled, read and parse policy file. """ self._policy_enforcement_enabled = self.__get_policy_enforcement_enabled() self._policy_file_contents = None # Raw policy file contents will be saved in __read_policy() if not self.policy_enforcement_enabled: return _PolicyEngine._log_policy_event("Policy enforcement is enabled.") self._policy = self._parse_policy(self.__read_policy()) @property def policy_file_contents(self): return self._policy_file_contents @staticmethod def _log_policy_event(msg, is_success=True, op=WALAEventOperation.Policy, send_event=True): """ Log information to console and telemetry. """ if is_success: logger.info(msg) else: logger.error(msg) if send_event: add_event(op=op, message=msg, is_success=is_success, log_event=False) @staticmethod def __get_policy_enforcement_enabled(): """ Policy will be enabled if (1) policy file exists at the expected location and (2) the conf flag "Debug.EnableExtensionPolicy" is true. """ return conf.get_extension_policy_enabled() and os.path.exists(conf.get_policy_file_path()) @property def policy_enforcement_enabled(self): return self._policy_enforcement_enabled def __read_policy(self): """ Read customer-provided policy JSON file, load and return as a dict. Policy file is expected to be at conf.get_policy_file_path(). Note that this method should only be called after verifying that the file exists (currently done in __init__). 
Raise InvalidPolicyError if JSON is invalid, or any exceptions are thrown while reading the file. """ with open(conf.get_policy_file_path(), 'r') as f: try: self._policy_file_contents = f.read() _PolicyEngine._log_policy_event( "Enforcing policy using policy file found at '{0}'.".format(conf.get_policy_file_path())) # json.loads will raise error if file contents are not a valid json (including empty file). custom_policy = json.loads(self._policy_file_contents) except ValueError as ex: msg = "policy file does not conform to valid json syntax." if self._policy_file_contents is not None: msg += " File contents: {0}".format(self._policy_file_contents) raise InvalidPolicyError(msg=msg, inner=ex) except Exception as ex: msg = "unable to read or load policy file." if self._policy_file_contents is not None: msg += " File contents: {0}".format(self._policy_file_contents) raise InvalidPolicyError(msg=msg, inner=ex) return custom_policy @staticmethod def _parse_policy(policy): """ Parses the given policy document and returns an equivalent document that has been populated with default values and verified for correctness, i.e. that conforms the following schema: { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": , [Optional; default: false] "signatureRequired": , [Optional; default: false] "extensions": { [Optional; default: {} (empty)] "": { "signatureRequired": [Optional; no default] "runtimePolicy": [Optional; no default] } }, } } Raises InvalidPolicyError if the policy document is invalid. 
""" if not isinstance(policy, dict): raise InvalidPolicyError("expected an object describing a Policy; got {0}.".format(type(policy).__name__)) _PolicyEngine._check_attributes(policy, object_name="policy", valid_attributes=["policyVersion", "extensionPolicies"]) return { "policyVersion": _PolicyEngine._parse_policy_version(policy), "extensionPolicies": _PolicyEngine._parse_extension_policies(policy) } @staticmethod def _parse_policy_version(policy): """ Validate and return "policyVersion" attribute. If not a string in the format "major[.minor[.patch]]", raise InvalidPolicyError. If policy_version is greater than maximum supported version, raise InvalidPolicyError. """ version = _PolicyEngine._get_string(policy, attribute="policyVersion") if not re.match(r"^\d+(\.\d+(\.\d+)?)?$", version): raise InvalidPolicyError("invalid value for attribute 'policyVersion': '{0}'; it should be in format 'major[.minor[.patch]]' (e.g., '1', '1.0', '1.0.0')".format(version)) if FlexibleVersion(_MAX_SUPPORTED_POLICY_VERSION) < FlexibleVersion(version): raise InvalidPolicyError("policy version '{0}' is not supported. The agent supports policy versions up to '{1}'.".format(version, _MAX_SUPPORTED_POLICY_VERSION)) return version @staticmethod def _parse_extension_policies(policy): """ Parses the "extensionPolicies" attribute of the policy document. See _parse_policy() for schema. 
""" extension_policies = _PolicyEngine._get_dictionary(policy, attribute="extensionPolicies", optional=True, default={}) _PolicyEngine._check_attributes(extension_policies, object_name="extensionPolicies", valid_attributes=["allowListedExtensionsOnly", "signatureRequired", "extensions"]) return { "allowListedExtensionsOnly": _PolicyEngine._get_boolean(extension_policies, attribute="allowListedExtensionsOnly", name_prefix="extensionPolicies.", optional=True, default=_DEFAULT_ALLOW_LISTED_EXTENSIONS_ONLY), "signatureRequired": _PolicyEngine._get_boolean(extension_policies, attribute="signatureRequired", name_prefix="extensionPolicies.", optional=True, default=_DEFAULT_SIGNATURE_REQUIRED), "extensions": _PolicyEngine._parse_extensions( _PolicyEngine._get_dictionary(extension_policies, attribute="extensions", name_prefix="extensionPolicies.", optional=True, default=_DEFAULT_EXTENSIONS) ) } @staticmethod def _parse_extensions(extensions): """ Parses the "extensions" attribute. See _parse_policy() for schema. The return value is a case-folded dict. CRP allows extensions to be any case, so we allow for case-insensitive lookup of individual extension policies. """ parsed = _CaseFoldedDict.from_dict({}) for extension, extension_policy in extensions.items(): if not isinstance(extension_policy, dict): raise InvalidPolicyError("invalid type {0} for attribute 'extensionPolicies.extensions.{1}'; must be 'object'".format(type(extension_policy).__name__, extension)) parsed[extension] = _PolicyEngine._parse_extension(extension_policy) return parsed @staticmethod def _parse_extension(extension): """ Parses an individual extension. See _parse_policy() for schema. 
""" extension_attribute_name = "extensionPolicies.extensions.{0}".format(extension) _PolicyEngine._check_attributes(extension, object_name=extension_attribute_name, valid_attributes=["signatureRequired", "runtimePolicy"]) return_value = {} signature_required = _PolicyEngine._get_boolean(extension, attribute="signatureRequired", name_prefix=extension_attribute_name, optional=True, default=None) if signature_required is not None: return_value["signatureRequired"] = signature_required # The runtimePolicy is an arbitrary object. runtime_policy = extension.get("runtimePolicy") if runtime_policy is not None: return_value["runtimePolicy"] = runtime_policy return return_value @staticmethod def _check_attributes(object_, object_name, valid_attributes): """ Check that the given object, which should be a dictionary, has only the specified attributes. If any other attributes are present, raise InvalidPolicyError. The object_name is used in the error message. """ for k in object_.keys(): if k not in valid_attributes: raise InvalidPolicyError("unrecognized attribute '{0}' in {1}".format(k, object_name)) @staticmethod def _get_dictionary(object_, attribute, name_prefix="", optional=False, default=None): """ Returns object[attribute] if it exists, verifying that it is a dictionary. If object_[attribute] does not exist and 'optional' is True, returns 'default'; if 'optional' is False raises InvalidPolicyError. If object_[attribute] is not a dictionary, raises InvalidPolicyError. The name_prefix indicates the path of the attribute within the policy document and is used in the error message. """ return _PolicyEngine._get_value(object_, attribute, name_prefix, dict, "object", optional=optional, default=default) @staticmethod def _get_string(object_, attribute, name_prefix="", optional=False, default=None): """ Returns object[attribute] if it exists, verifying that it is a string, else returns default. 
If object_[attribute] does not exist and 'optional' is True, returns 'default'; if 'optional' is False raises InvalidPolicyError. If object_[attribute] is not a string, raises InvalidPolicyError. The name_prefix indicates the path of the attribute within the policy document and is used in the error message. """ return _PolicyEngine._get_value(object_, attribute, name_prefix, ustr, "string", optional=optional, default=default) @staticmethod def _get_boolean(object_, attribute, name_prefix="", optional=False, default=None): """ Returns object[attribute] if it exists, verifying that it is a boolean, else returns default. If object_[attribute] does not exist and 'optional' is True, returns 'default'; if 'optional' is False raises InvalidPolicyError. If object_[attribute] is not a boolean, raises InvalidPolicyError. The name_prefix indicates the path of the attribute within the policy document and is used in the error message. """ return _PolicyEngine._get_value(object_, attribute, name_prefix, bool, "boolean", optional=optional, default=default) @staticmethod def _get_value(object_, attribute, name_prefix, type_, type_name, optional, default): """ Returns object[attribute] if it exists, verifying that it is of the given type_, else returns default. If object_[attribute] does not exist and 'optional' is True, returns 'default'; if 'optional' is False raises InvalidPolicyError. If the type of object_[attribute] is not 'type_', raises InvalidPolicyError. The name_prefix indicates the path of the attribute within the policy document, the type_name indicates a user-friendly name for type_; both are used in the error message. 
""" if default is not None and not optional: raise ValueError("default value should only be provided for optional attributes") value = object_.get(attribute) if value is None: if not optional: raise InvalidPolicyError("missing required attribute '{0}{1}'".format(name_prefix, attribute)) return default if not isinstance(value, type_): raise InvalidPolicyError("invalid type {0} for attribute '{1}{2}'; must be '{3}'".format(type(value).__name__, name_prefix, attribute, type_name)) return value class ExtensionPolicyEngine(_PolicyEngine): def should_allow_extension(self, extension_name): """ Return whether we should allow extension download based on policy. If policy feature not enabled, return True. If allowListedExtensionsOnly=true, return true only if extension present in "extensions" allowlist. If allowListedExtensions=false, return true always. """ if not self.policy_enforcement_enabled: return True allow_listed_extension_only = self._policy.get("extensionPolicies").get("allowListedExtensionsOnly") extension_allowlist = self._policy.get("extensionPolicies").get("extensions") should_allow = not allow_listed_extension_only or extension_allowlist.get(extension_name) is not None return should_allow def should_enforce_signature_validation(self, extension_name): """ Return whether we should enforce signature based on policy. If policy feature not enabled, return False. Individual policy takes precedence over global. 
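# The precedence rule described above (an individual extension's
# signatureRequired setting overrides the global one) can be sketched with a
# hypothetical standalone helper:

```python
def effective_signature_required(global_required, individual_required):
    # A per-extension setting, when present, takes precedence over the global one.
    if individual_required is not None:
        return individual_required
    return global_required

print(effective_signature_required(True, False))  # False: individual overrides global
print(effective_signature_required(True, None))   # True: falls back to global
```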
""" if not self.policy_enforcement_enabled: return False global_signature_required = self._policy.get("extensionPolicies").get("signatureRequired") individual_policy = self._policy.get("extensionPolicies").get("extensions").get(extension_name) individual_signature_required = individual_policy.get("signatureRequired") if individual_policy is not None else None return individual_signature_required if individual_signature_required is not None else global_signature_required Azure-WALinuxAgent-a976115/azurelinuxagent/ga/remoteaccess.py000066400000000000000000000143721510742556200242440ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import os.path from datetime import datetime, timedelta import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION REMOTE_USR_EXPIRATION_FORMAT = "%a, %d %b %Y %H:%M:%S %Z" DATE_FORMAT = "%Y-%m-%d" TRANSPORT_PRIVATE_CERT = "TransportPrivate.pem" REMOTE_ACCESS_ACCOUNT_COMMENT = "JIT_Account" MAX_TRY_ATTEMPT = 5 FAILED_ATTEMPT_THROTTLE = 1 def get_remote_access_handler(protocol): return RemoteAccessHandler(protocol) class RemoteAccessHandler(object): def __init__(self, protocol): self._os_util = get_osutil() self._protocol = protocol self._cryptUtil = CryptUtil(conf.get_openssl_cmd()) self._remote_access = None self._check_existing_jit_users = True def run(self): try: if self._os_util.jit_enabled: # Handle remote access if any. 
self._remote_access = self._protocol.client.get_remote_access() self._handle_remote_access() except Exception as e: msg = u"Exception processing goal state for remote access users: {0}".format(textutil.format_exception(e)) add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.RemoteAccessHandling, is_success=False, message=msg) def _get_existing_jit_users(self): all_users = self._os_util.get_users() return set(u[0] for u in all_users if self._is_jit_user(u[4])) def _handle_remote_access(self): if self._remote_access is not None: logger.info("Processing remote access users in goal state.") self._check_existing_jit_users = True existing_jit_users = self._get_existing_jit_users() goal_state_users = set(u.name for u in self._remote_access.user_list.users) for acc in self._remote_access.user_list.users: try: raw_expiration = acc.expiration account_expiration = datetime.strptime(raw_expiration, REMOTE_USR_EXPIRATION_FORMAT).replace(tzinfo=UTC) now = datetime.now(UTC) if acc.name not in existing_jit_users and now < account_expiration: self._add_user(acc.name, acc.encrypted_password, account_expiration) elif acc.name in existing_jit_users and now > account_expiration: # user account expired, delete it. logger.info("Remote access user '{0}' expired.", acc.name) self._remove_user(acc.name) except Exception as e: logger.error("Error processing remote access user '{0}' - {1}", acc.name, ustr(e)) for user in existing_jit_users: try: if user not in goal_state_users: # user explicitly removed self._remove_user(user) except Exception as e: logger.error("Error removing remote access user '{0}' - {1}", user, ustr(e)) else: # There are no JIT users in the goal state; that may mean that they were removed or that they # were never added. Enumerating the users on the current vm can be very slow and this path is hit # on each goal state; we use self._check_existing_jit_users to avoid enumerating the users # every single time. 
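# The expiration handling in _handle_remote_access above parses the goal-state
# timestamp with REMOTE_USR_EXPIRATION_FORMAT and compares it against the
# current time; _add_user then hands useradd the expiration plus one day in
# DATE_FORMAT. A minimal sketch (the sample timestamp is hypothetical, and the
# UTC tzinfo handling from the agent's future module is omitted here):

```python
from datetime import datetime, timedelta

REMOTE_USR_EXPIRATION_FORMAT = "%a, %d %b %Y %H:%M:%S %Z"

# Hypothetical goal-state value; real values come from the remote access user list.
raw_expiration = "Mon, 01 Jan 2024 12:00:00 UTC"
account_expiration = datetime.strptime(raw_expiration, REMOTE_USR_EXPIRATION_FORMAT)

# _add_user passes expiration + 1 day to useradd, formatted as YYYY-MM-DD.
print((account_expiration + timedelta(days=1)).strftime("%Y-%m-%d"))  # 2024-01-02
```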
if self._check_existing_jit_users: logger.info("Looking for existing remote access users.") existing_jit_users = self._get_existing_jit_users() remove_user_errors = False for user in existing_jit_users: try: self._remove_user(user) except Exception as e: logger.error("Error removing remote access user '{0}' - {1}", user, ustr(e)) remove_user_errors = True if not remove_user_errors: self._check_existing_jit_users = False @staticmethod def _is_jit_user(comment): return comment == REMOTE_ACCESS_ACCOUNT_COMMENT def _add_user(self, username, encrypted_password, account_expiration): user_added = False try: expiration_date = (account_expiration + timedelta(days=1)).strftime(DATE_FORMAT) logger.info("Adding remote access user '{0}' with expiration date {1}", username, expiration_date) self._os_util.useradd(username, expiration_date, REMOTE_ACCESS_ACCOUNT_COMMENT) user_added = True logger.info("Adding remote access user '{0}' to sudoers", username) prv_key = os.path.join(conf.get_lib_dir(), TRANSPORT_PRIVATE_CERT) pwd = self._cryptUtil.decrypt_secret(encrypted_password, prv_key) self._os_util.chpasswd(username, pwd, conf.get_password_cryptid(), conf.get_password_crypt_salt_len()) self._os_util.conf_sudoer(username) except Exception: if user_added: self._remove_user(username) raise def _remove_user(self, username): logger.info("Removing remote access user '{0}'", username) self._os_util.del_account(username) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/rsm_version_updater.py000066400000000000000000000167531510742556200256660ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import glob import os from azurelinuxagent.common import conf, logger, event from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException, AgentUpdateError from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import CURRENT_VERSION, AGENT_NAME from azurelinuxagent.ga.ga_version_updater import GAVersionUpdater from azurelinuxagent.ga.guestagent import GuestAgent class RSMVersionUpdater(GAVersionUpdater): def __init__(self, gs_id, daemon_version): super(RSMVersionUpdater, self).__init__(gs_id) self._daemon_version = daemon_version @staticmethod def _get_all_agents_on_disk(): path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) return [GuestAgent.from_installed_agent(path=agent_dir) for agent_dir in glob.iglob(path) if os.path.isdir(agent_dir)] def _get_available_agents_on_disk(self): available_agents = [agent for agent in self._get_all_agents_on_disk() if agent.is_available] return sorted(available_agents, key=lambda agent: agent.version, reverse=True) def is_update_allowed_this_time(self, ext_gs_updated): """ RSM update allowed if we have a new goal state """ return ext_gs_updated def is_rsm_update_enabled(self, agent_family, ext_gs_updated): """ Checks if there is a new goal state and decide if we need to continue with rsm update or switch to self-update. Firstly it checks agent supports GA versioning or not. If not, we return false to switch to self-update. 
If it does and the vm is enabled for RSM updates, we continue with rsm update; otherwise we return false to switch to self-update. If isVersionFromRSM, isVMEnabledForRSMUpgrades, or the version is missing in the goal state, we ignore the update and treat the goal state as invalid. """ if ext_gs_updated: if not conf.get_enable_ga_versioning(): return False if agent_family.is_vm_enabled_for_rsm_upgrades is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing isVMEnabledForRSMUpgrades property. So, skipping agent update".format( self._gs_id)) elif not agent_family.is_vm_enabled_for_rsm_upgrades: return False else: if agent_family.is_version_from_rsm is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing isVersionFromRSM property. So, skipping agent update".format( self._gs_id)) if agent_family.version is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing version property. So, skipping agent update".format( self._gs_id)) return True def retrieve_agent_version(self, agent_family, goal_state): """ Get the agent version from the goal state """ self._version = FlexibleVersion(agent_family.version) def is_retrieved_version_allowed_to_update(self, agent_family): """ Once the version is retrieved from the goal state, we check whether we are allowed to update to it. The update is allowed only if the new version differs from the current version, is not below the daemon version, and was requested by RSM. A downgrade is allowed only when from_version (the version being updated from) matches the current agent version.
""" if not agent_family.is_version_from_rsm or self._version == CURRENT_VERSION: return False # If the version is below daemon version or if it is a downgrade and the current agent version is not the one we are downgrading from, we don't allow update elif self._version < self._daemon_version: raise AgentUpdateError("Received invalid update request:{0}, new version {1} is below than daemon version {2}".format( self._gs_id, str(self._version), str(self._daemon_version))) elif self._version < CURRENT_VERSION and CURRENT_VERSION != FlexibleVersion(agent_family.from_version): raise AgentUpdateError("Received invalid update request:{0}, downgrade {1} is not allowed to update from {2}. Current agent version running: {3}".format( self._gs_id, str(self._version), agent_family.from_version, str(CURRENT_VERSION))) return True def log_new_agent_update_message(self): """ This function logs the update message after we check version allowed to update. """ msg = "New agent version:{0} requested by RSM in Goal state {1}, will update the agent before processing the goal state.".format( str(self._version), self._gs_id) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) def proceed_with_update(self): """ upgrade/downgrade to the new version. Raises: AgentUpgradeExitException """ if self._version < CURRENT_VERSION: # In case of a downgrade, we mark the current agent as bad version to avoid starting it back up ever again # (the expectation here being that if we get request to a downgrade, # there's a good reason for not wanting the current version). 
prefix = "downgrade" try: # We should always have an agent directory for the CURRENT_VERSION agents_on_disk = self._get_available_agents_on_disk() current_agent = next(agent for agent in agents_on_disk if agent.version == CURRENT_VERSION) msg = "Marking the agent {0} as bad version since a downgrade was requested in the GoalState, " \ "suggesting that we really don't want to execute any extensions using this version".format( CURRENT_VERSION) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) current_agent.mark_failure(is_fatal=True, reason=msg, report_func=event.info) except StopIteration: logger.warn( "Could not find a matching agent with current version {0} to blacklist, skipping it".format( CURRENT_VERSION)) else: # In case of an upgrade, we don't need to exclude anything as the daemon will automatically # start the next available highest version which would be the target version prefix = "upgrade" raise AgentUpgradeExitException( "Current Agent {0} completed all update checks, exiting current process to {1} to the new Agent version {2}".format(CURRENT_VERSION, prefix, self._version)) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/self_update_version_updater.py000066400000000000000000000214251510742556200273500ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ import datetime import random from azurelinuxagent.common import conf, logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException, AgentUpdateError from azurelinuxagent.common.future import UTC, datetime_min_utc from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils import timeutil from azurelinuxagent.common.version import CURRENT_VERSION from azurelinuxagent.ga.ga_version_updater import GAVersionUpdater from azurelinuxagent.ga.guestagent import GuestAgentUpdateUtil class SelfUpdateType(object): """ Enum for different modes of Self updates """ Hotfix = "Hotfix" Regular = "Regular" class SelfUpdateVersionUpdater(GAVersionUpdater): def __init__(self, gs_id): super(SelfUpdateVersionUpdater, self).__init__(gs_id) self._last_attempted_manifest_download_time = datetime_min_utc self._next_update_time = datetime_min_utc @staticmethod def _get_largest_version(agent_manifest): """ Get the largest version from the agent manifest """ largest_version = FlexibleVersion("0.0.0.0") for pkg in agent_manifest.pkg_list.versions: pkg_version = FlexibleVersion(pkg.version) if pkg_version > largest_version: largest_version = pkg_version return largest_version @staticmethod def _get_agent_upgrade_type(version): # We follow semantic versioning for the agent: if major.minor.patch is the same, then only the build number has changed. # In this case, we consider it a Hotfix upgrade. Else we consider it a Regular upgrade.
if version.major == CURRENT_VERSION.major and version.minor == CURRENT_VERSION.minor and version.patch == CURRENT_VERSION.patch: return SelfUpdateType.Hotfix return SelfUpdateType.Regular @staticmethod def _get_next_process_time(upgrade_type, now): """ Returns a random time between 0 and 24hrs (regular) or 4hrs (hotfix) from now """ if upgrade_type == SelfUpdateType.Hotfix: frequency = conf.get_self_update_hotfix_frequency() else: frequency = conf.get_self_update_regular_frequency() return now + datetime.timedelta(seconds=random.randint(0, frequency)) def _new_agent_allowed_now_to_update(self): """ This method is called when a new update is detected and computes a random time for the next update on the first call. Since the method is called periodically until we reach the next update time, we shouldn't refresh or recompute the next update time on every call. We use the default value (datetime.datetime.min) to ensure the computation happens only once. This next_update_time is reset to the default value (datetime.min) when the agent is allowed to update, so that, in case the update fails due to an issue such as a package download error, the same default value is used to recompute the next update time.
""" now = datetime.datetime.now(UTC) upgrade_type = self._get_agent_upgrade_type(self._version) if self._next_update_time == datetime_min_utc: self._next_update_time = self._get_next_process_time(upgrade_type, now) message = "Self-update discovered new {0} upgrade WALinuxAgent-{1}; Will upgrade on or after {2}".format( upgrade_type, str(self._version), timeutil.create_utc_timestamp(self._next_update_time)) logger.info(message) add_event(op=WALAEventOperation.AgentUpgrade, message=message, log_event=False) if self._next_update_time <= now: self._next_update_time = datetime_min_utc return True return False def _should_agent_attempt_manifest_download(self): """ The agent should attempt to download the manifest if the agent has not attempted to download the manifest in the last 1 hour If we allow update, we update the last attempted manifest download time """ now = datetime.datetime.now(UTC) if self._last_attempted_manifest_download_time != datetime_min_utc: next_attempt_time = self._last_attempted_manifest_download_time + datetime.timedelta( seconds=conf.get_autoupdate_frequency()) else: next_attempt_time = now if next_attempt_time > now: return False self._last_attempted_manifest_download_time = now return True def is_update_allowed_this_time(self, ext_gs_updated): """ Checks if we allowed download manifest as per manifest download frequency """ if not self._should_agent_attempt_manifest_download(): return False return True def is_rsm_update_enabled(self, agent_family, ext_gs_updated): """ Checks if there is a new goal state and decide if we need to continue with self-update or switch to rsm update. if vm is not enabled for RSM updates or agent not supports GA versioning then we continue with self update, otherwise we return true to switch to rsm update. if isVersionFromRSM is missing but isVMEnabledForRSMUpgrades is present in the goal state, we ignore the update as we consider it as invalid goal state. 
""" if ext_gs_updated: if conf.get_enable_ga_versioning() and agent_family.is_vm_enabled_for_rsm_upgrades is not None and agent_family.is_vm_enabled_for_rsm_upgrades: if agent_family.is_version_from_rsm is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing isVersionFromRSM property. So, skipping agent update".format( self._gs_id)) else: if agent_family.version is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing version property. So, skipping agent update".format( self._gs_id)) return True return False def retrieve_agent_version(self, agent_family, goal_state): """ Get the largest version from the agent manifest """ self._agent_manifest = goal_state.fetch_agent_manifest(agent_family.name, agent_family.uris) largest_version = self._get_largest_version(self._agent_manifest) self._version = largest_version def is_retrieved_version_allowed_to_update(self, agent_family): """ we don't allow new version update, if 1) The version is not greater than current version 2) if current time is before next update time Allow the update, if 1) Initial update 2) If current time is on or after next update time """ if self._version <= CURRENT_VERSION: return False # very first update need to proceed without any delay if GuestAgentUpdateUtil.is_initial_update(): return True if not self._new_agent_allowed_now_to_update(): return False return True def log_new_agent_update_message(self): """ This function logs the update message after we check version allowed to update. """ msg = "Self-update is ready to upgrade the new agent: {0} now before processing the goal state: {1}".format( str(self._version), self._gs_id) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) def proceed_with_update(self): """ upgrade to largest version. Downgrade is not supported. 
Raises: AgentUpgradeExitException """ if self._version > CURRENT_VERSION: # In case of an upgrade, we don't need to exclude anything as the daemon will automatically # start the next available highest version which would be the target version raise AgentUpgradeExitException( "Current Agent {0} completed all update checks, exiting current process to upgrade to the new Agent version {1}".format(CURRENT_VERSION, self._version)) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/send_telemetry_events.py000066400000000000000000000162711510742556200261760ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime import threading import time from azurelinuxagent.common import logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import ServiceStoppedError from azurelinuxagent.common.future import ustr, UTC, Queue, Empty from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.common.utils import textutil def get_send_telemetry_events_handler(protocol_util): return SendTelemetryEventsHandler(protocol_util) class SendTelemetryEventsHandler(ThreadHandlerInterface): """ This Handler takes care of sending all telemetry out of the agent to Wireserver. It sends out data as soon as there's any data available in the queue to send. 
""" _THREAD_NAME = "SendTelemetryHandler" _MAX_TIMEOUT = datetime.timedelta(seconds=5).seconds _MIN_EVENTS_TO_BATCH = 30 _MIN_BATCH_WAIT_TIME = datetime.timedelta(seconds=5) def __init__(self, protocol_util): self._protocol = protocol_util.get_protocol() self.should_run = True self._thread = None # We're using a Queue for handling the communication between threads. We plan to remove any dependency on the # filesystem in the future and use add_event to directly queue events into the queue rather than writing to # a file and then parsing it later. # Once we move add_event to directly queue events, we need to add a maxsize here to ensure some limitations are # being set (currently our limits are enforced by collector_threads but that would become obsolete once we # start enqueuing events directly). self._queue = Queue() @staticmethod def get_thread_name(): return SendTelemetryEventsHandler._THREAD_NAME def run(self): logger.info("Start SendTelemetryHandler service.") self.start() def is_alive(self): return self._thread is not None and self._thread.is_alive() def start(self): self._thread = threading.Thread(target=self._process_telemetry_thread) self._thread.daemon = True self._thread.name = self.get_thread_name() self._thread.start() def stop(self): """ Stop server communication and join the thread to main thread. """ self.should_run = False if self.is_alive(): self.join() def join(self): self._queue.join() self._thread.join() def stopped(self): return not self.should_run def enqueue_event(self, event): # Add event to queue and set event if self.stopped(): raise ServiceStoppedError("{0} is stopped, not accepting anymore events".format(self.get_thread_name())) # Queue.put() can block if the queue is full which can be an uninterruptible wait. Blocking for a max of # SendTelemetryEventsHandler._MAX_TIMEOUT seconds and raising a ServiceStoppedError to retry later. # Todo: Queue.put() will only raise a Full exception if a maxsize is set for the Queue. 
Once some size # limitations are set for the Queue, ensure that case is handled correctly here. try: self._queue.put(event, timeout=SendTelemetryEventsHandler._MAX_TIMEOUT) except Exception as error: raise ServiceStoppedError( "Unable to enqueue due to: {0}, stopping any more enqueuing until the next run".format(ustr(error))) def _wait_for_event_in_queue(self): """ Wait for at least one event in the Queue or timeout after SendTelemetryEventsHandler._MAX_TIMEOUT seconds. In case of a timeout, set the event to None. :return: event if an event is added to the Queue or None to signify no events were added in the queue. This would raise in case of an error. """ try: event = self._queue.get(timeout=SendTelemetryEventsHandler._MAX_TIMEOUT) self._queue.task_done() except Empty: # No elements in Queue, return None event = None return event def _process_telemetry_thread(self): logger.info("Successfully started the {0} thread".format(self.get_thread_name())) try: # On demand wait, start processing as soon as there is any data available in the queue. In the worst case, # also keep checking every SendTelemetryEventsHandler._MAX_TIMEOUT secs to avoid uninterruptible waits. # In case the service is stopped but we have events in the queue, ensure we send them out before killing the thread. while not self.stopped() or not self._queue.empty(): first_event = self._wait_for_event_in_queue() if first_event: # Start processing queue only if first event is not None (i.e.
Queue has atleast 1 event), # else do nothing self._send_events_in_queue(first_event) except Exception as error: err_msg = "An unknown error occurred in the {0} thread main loop, stopping thread.{1}".format( self.get_thread_name(), textutil.format_exception(error)) add_event(op=WALAEventOperation.UnhandledError, message=err_msg, is_success=False) def _send_events_in_queue(self, first_event): # Process everything in Queue start_time = datetime.datetime.now(UTC) while not self.stopped() and (self._queue.qsize() + 1) < self._MIN_EVENTS_TO_BATCH and ( start_time + self._MIN_BATCH_WAIT_TIME) > datetime.datetime.now(UTC): # To promote batching, we either wait for atleast _MIN_EVENTS_TO_BATCH events or _MIN_BATCH_WAIT_TIME secs # before sending out the first request to wireserver. # If the thread is requested to stop midway, we skip batching and send whatever we have in the queue. logger.verbose("Waiting for events to batch. Total events so far: {0}, Time elapsed: {1} secs", self._queue.qsize()+1, (datetime.datetime.now(UTC) - start_time).seconds) time.sleep(1) # Delete files after sending the data rather than deleting and sending self._protocol.report_event(self._get_events_in_queue(first_event)) def _get_events_in_queue(self, first_event): yield first_event while not self._queue.empty(): try: event = self._queue.get_nowait() self._queue.task_done() yield event except Exception as error: logger.error("Some exception when fetching event from queue: {0}".format(textutil.format_exception(error))) Azure-WALinuxAgent-a976115/azurelinuxagent/ga/signature_validation_util.py000066400000000000000000000436541510742556200270440ustar00rootroot00000000000000# Windows Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import base64 import datetime import os import re from azurelinuxagent.common import conf from azurelinuxagent.common.utils.shellutil import run_command, CommandError from azurelinuxagent.common.exception import AgentError from azurelinuxagent.common import logger from azurelinuxagent.ga.signing_certificate_util import get_microsoft_signing_certificate_path from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.future import ustr, UTC, datetime_min_utc from azurelinuxagent.common.event import add_event, WALAEventOperation, elapsed_milliseconds from azurelinuxagent.common.version import AGENT_VERSION, AGENT_NAME # This file tracks the state of signature and manifest validation for the package. If the file exists, signature and # manifest have both been successfully validated. _PACKAGE_VALIDATION_STATE_FILE = "package_validated" # Signature validation requires OpenSSL version 1.1.0 or later. The 'no_check_time' flag used for the 'openssl cms -verify' # command is not supported on older versions. _MIN_OPENSSL_VERSION_FOR_SIG_VALIDATION = FlexibleVersion("1.1.0") class PackageValidationError(AgentError): """ Error raised when validation fails for a package. """ def __init__(self, msg=None, inner=None, code=-1): super(PackageValidationError, self).__init__(msg, inner) self.code = code class SignatureValidationError(PackageValidationError): """ Error raised when signature validation fails for a package. 
""" class ManifestValidationError(PackageValidationError): """ Error raised when handler manifest 'signingInfo' validation fails for a package. """ def _get_openssl_version(): """ Calls 'openssl version' via subprocess and extracts the version from its output. Returns OpenSSL version string in major.minor.patch format. Any letter suffix is ignored (e.g., '1.1.1f' and '1.1.1wa-fips' will both return '1.1.1'). If version cannot be found, returns '0.0.0'. """ try: command = [conf.get_openssl_cmd(), 'version'] output = run_command(command) if output is None: logger.error("Failed to get OpenSSL version. '{0}' returned no output.", ' '.join(command)) return "0.0.0" match = re.search(r"OpenSSL (\d+\.\d+\.\d+)", output) if match is not None: return match.group(1) else: logger.error("Failed to get OpenSSL version. '{0}' returned output: {1}", ' '.join(command), output) return "0.0.0" except CommandError as ex: logger.error("Failed to get OpenSSL version. Error: {0}", ex.stderr) return "0.0.0" def openssl_version_supported_for_signature_validation(): # Signature validation currently requires OpenSSL >= 1.1.0 to support the 'no_check_time' flag # used with the 'openssl cms verify' command. This flag bypasses timestamp checks, and will be removed once # proper timestamp validation is implemented. # # For private preview release only, signature validation is only supported on distros with OpenSSL >= 1.1.0, and # users will be informed accordingly. If the OpenSSL version is too old, we log this and return False rather than # raising an error. openssl_version = _get_openssl_version() if FlexibleVersion(openssl_version) < _MIN_OPENSSL_VERSION_FOR_SIG_VALIDATION: msg = ("Signature validation requires OpenSSL version {0}, but the current version is {1}. 
" "To validate signature, please upgrade OpenSSL to version {0} or higher.").format( _MIN_OPENSSL_VERSION_FOR_SIG_VALIDATION, openssl_version) logger.info(msg) return False return True def _write_signature_to_file(sig_string, output_file): """ Convert the base64-encoded signature string to binary, and write to the output file. """ binary_signature = base64.b64decode(sig_string.strip()) with open(output_file, "wb") as f: f.write(binary_signature) def _report_validation_event(op, level, message, name, version, duration): """ Log signature validation event and emit telemetry with appropriate message based on log level. 'level' is expected to be one of logger.LogLevel.INFO, WARNING, or ERROR. - if level is ERROR: prefix message with "[ERROR]" in telemetry - if level is WARNING: prefix with "[WARNING]" in telemetry, and append a message that failure can be ignored - if level is INFO: log message as-is TODO: for extension signature validation, add '[Name-Version]' prefix to log messages """ if level == logger.LogLevel.ERROR: logger.error(message) event_msg = message is_success = False elif level == logger.LogLevel.WARNING: message = "{0}\nThis failure can be safely ignored; will continue processing the package.".format(message) logger.warn(message) event_msg = "[WARNING] {0}".format(message) is_success = False else: # Log as INFO. If the level is invalid (i.e., not INFO, WARNING, or ERROR), treat it as INFO and prepend a warning to the message. if level != logger.LogLevel.INFO: message = "Invalid log level '{0}', reporting event at 'INFO' level instead. {1}".format(level, message) logger.info(message) event_msg = message is_success = True add_event(op=op, message=event_msg, name=name, version=version, is_success=is_success, duration=duration, log_event=False) def validate_signature(package_path, signature, package_full_name, failure_log_level): """ Validates signature of provided package using OpenSSL CLI. 
The verification checks the signature against a trusted Microsoft root certificate but does not enforce certificate expiration. :param package_path: path to package file being validated :param signature: base64-encoded signature string :param package_full_name: string in the format "Name-Version", only used for telemetry purposes :param failure_log_level: expected to be logger.LogLevel.WARNING or logger.LogLevel.ERROR. If level is warning, a message is appended to any failure log/telemetry indicating that the failure can be safely ignored. :raises SignatureValidationError: if signature validation fails """ # Initialize variables that will be used in the except/finally blocks. These are assigned inside the try block, # but defining them here ensures safe access if an exception occurs before assignment. start_time = datetime_min_utc signature_path = "" name, version = "", "" try: start_time = datetime.datetime.now(UTC) # Extract package name and version from 'package_full_name' for telemetry. If the format is not Name-Version, use # 'package_full_name' as the name and an empty string for version. name, version = package_full_name.rsplit('-', 1) if '-' in package_full_name else (package_full_name, "") signature_file_name = os.path.basename(package_path) + "_signature.pem" signature_path = os.path.join(conf.get_lib_dir(), str(signature_file_name)) _report_validation_event(op=WALAEventOperation.SignatureValidation, level=logger.LogLevel.INFO, message="Validating signature for package '{0}'".format(package_full_name), name=name, version=version, duration=0) _write_signature_to_file(signature, signature_path) microsoft_root_cert_file = get_microsoft_signing_certificate_path() if not os.path.isfile(microsoft_root_cert_file): msg = ("signing certificate was not found at expected location ('{0}').
Try restarting the agent, " "or see log ('{1}') for additional details.").format(microsoft_root_cert_file, conf.get_agent_log_file()) raise Exception(msg) # Use OpenSSL CLI to verify that the provided signature file correctly signs the package. The verification # process checks the certificate chain against the specified root certificate file, but the certificate's # expiration date is not enforced due to the `-no_check_time` flag. This allows the signature to be validated # regardless of the certificate's expiration status. However, bypassing expiration checking does not guarantee # that the signature is valid, as it could have been created with an expired/revoked certificate. This flag serves # as a temporary measure until a robust solution for handling expired/revoked certificates is implemented. # # TODO: implement timestamp token parsing and validate that certificate was valid at time of signing command = [ conf.get_openssl_cmd(), 'cms', '-verify', '-binary', '-inform', 'der', # Signature input format must be DER (binary encoding) '-in', signature_path, # Path to the CMS signature file to be verified '-content', package_path, # Path to the original package that was signed '-purpose', 'any', # Allows verification for any purpose, not restricted to specific uses '-CAfile', microsoft_root_cert_file, # Path to the trusted root certificate file used for verification '-no_check_time' # Skips checking whether the certificate is expired ] run_command(command, encode_output=False) _report_validation_event(op=WALAEventOperation.PackageSignatureResult, level=logger.LogLevel.INFO, message="Successfully validated signature for package '{0}'".format(package_full_name), name=name, version=version, duration=elapsed_milliseconds(start_time)) except CommandError as ex: # If the signature validation command failed, report a "PackageSignatureResult" event with operation duration. 
# Do not report this event for errors where the command was not attempted (e.g., missing root certificate). msg = "Signature validation failed for package '{0}'. \nReturn code: {1}\nError details:\n{2}".format(package_full_name, ex.returncode, ex.stderr) _report_validation_event(op=WALAEventOperation.PackageSignatureResult, level=failure_log_level, message=msg, name=name, version=version, duration=elapsed_milliseconds(start_time)) # For validation-related errors only, send the full signature string in telemetry for debugging purposes. # Send as a separate event to avoid dropping the main error event due to buffer overflow. add_event(op=WALAEventOperation.SignatureValidation, message="Package encoded signature: '{0}'".format(signature), name=name, version=version, log_event=False) raise SignatureValidationError(msg) except Exception as ex: # Catch all exceptions unrelated to OpenSSL signature verification. Report a "SignatureValidation" event without duration. msg = "Signature validation failed for package '{0}'. Error: {1}".format(package_full_name, ustr(ex)) _report_validation_event(op=WALAEventOperation.SignatureValidation, level=failure_log_level, message=msg, name=name, version=version, duration=0) raise SignatureValidationError(msg) finally: # If signature file cleanup fails, log a warning and swallow the error try: if signature_path != "" and os.path.isfile(signature_path): os.remove(signature_path) except Exception as ex: msg = "Failed to cleanup signature file ('{0}'). Error: {1}".format(signature_path, ex) _report_validation_event(op=WALAEventOperation.SignatureValidation, level=logger.LogLevel.WARNING, message=msg, name=name, version=version, duration=0) def validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level): """ For signed extensions, the handler manifest includes a "signingInfo" section that specifies the type, publisher, and version of the extension. 
During signature validation (after extracting zip package), we check these attributes against the expected values for the extension. If there is a mismatch, raise an error. :param manifest: HandlerManifest object :param ext_handler: Extension object :param failure_log_level: expected to be logger.LogLevel.WARNING or logger.LogLevel.ERROR. If level is warning, a message is appended to any failure log/telemetry indicating that the failure can be safely ignored. :raises ManifestValidationError: if handler manifest validation fails """ start_time = datetime_min_utc try: start_time = datetime.datetime.now(UTC) _report_validation_event(op=WALAEventOperation.SignatureValidation, level=logger.LogLevel.INFO, message="Validating handler manifest 'signingInfo' of extension '{0}'".format(ext_handler), name=ext_handler.name, version=ext_handler.version, duration=0) # Check that 'signingInfo' exists in the manifest structure man_signing_info = manifest.data.get("signingInfo") if man_signing_info is None: raise ManifestValidationError("HandlerManifest.json does not contain 'signingInfo'") def validate_attribute(attribute, extension_value): # Validate that the specified 'attribute' exists in 'signingInfo', and that it matches the expected 'extension_value'. # If not, report telemetry with is_success=False and raise a ManifestValidationError. signing_info_value = man_signing_info.get(attribute) if signing_info_value is None: raise ManifestValidationError("HandlerManifest.json does not contain attribute 'signingInfo.{0}'".format(attribute)) # Comparison should be case-insensitive, because CRP ignores case for extension name. 
if extension_value.lower() != signing_info_value.lower(): raise ManifestValidationError("expected extension {0} '{1}' does not match downloaded package {0} '{2}'".format(attribute, extension_value, signing_info_value)) # Compare extension attributes against the attributes specified in 'signingInfo' ext_publisher, ext_type = ext_handler.name.rsplit(".", 1) validate_attribute(attribute="type", extension_value=ext_type) validate_attribute(attribute="publisher", extension_value=ext_publisher) validate_attribute(attribute="version", extension_value=ext_handler.version) _report_validation_event(op=WALAEventOperation.PackageSigningInfoResult, level=logger.LogLevel.INFO, message="Successfully validated handler manifest 'signingInfo' for extension '{0}'".format(ext_handler), name=ext_handler.name, version=ext_handler.version, duration=elapsed_milliseconds(start_time)) except ManifestValidationError as ex: _report_validation_event(op=WALAEventOperation.PackageSigningInfoResult, level=failure_log_level, message=ustr(ex), name=ext_handler.name, version=ext_handler.version, duration=elapsed_milliseconds(start_time)) raise except Exception as ex: # Catch exceptions unrelated to 'signingInfo' validation (e.g. incorrectly formatted extension name). Report "SignatureValidation" event with no duration. msg = "Error during manifest 'signingInfo' validation for extension '{0}'. Error: {1}".format(ext_handler, ustr(ex)) _report_validation_event(op=WALAEventOperation.SignatureValidation, level=failure_log_level, message=msg, name=ext_handler.name, version=ext_handler.version, duration=0) raise ManifestValidationError(msg=msg) def save_signature_validation_state(target_dir): """ Create signature validation state file in the target directory. Existence of file indicates that signature and manifest were successfully validated for the package. 'name' and 'version' are used only for telemetry purposes. 
    """
    try:
        validation_state_file = os.path.join(target_dir, _PACKAGE_VALIDATION_STATE_FILE)
        _report_validation_event(op=WALAEventOperation.SignatureValidation, level=logger.LogLevel.INFO, name=AGENT_NAME, version=AGENT_VERSION,
                                 message="Saving signature validation state file: {0}".format(validation_state_file), duration=0)
        with open(validation_state_file, 'w'):
            pass
    except Exception as e:
        msg = "Error saving signature validation state file ('{0}') in directory '{1}': {2}".format(_PACKAGE_VALIDATION_STATE_FILE, target_dir, ustr(e))
        _report_validation_event(op=WALAEventOperation.SignatureValidation, level=logger.LogLevel.WARNING, name=AGENT_NAME, version=AGENT_VERSION, message=msg, duration=0)
        raise PackageValidationError(msg=msg)


def signature_has_been_validated(target_dir):
    """
    Returns True if the signature validation state file exists in the specified directory.
    Presence of the file indicates that the package signature was successfully validated.
    """
    validation_state_file = os.path.join(target_dir, _PACKAGE_VALIDATION_STATE_FILE)
    return os.path.exists(validation_state_file)


def signature_validation_enabled():
    """
    Returns True if signature validation is enabled in the conf file and the OpenSSL version supports all validation parameters.
    """
    return conf.get_signature_validation_enabled() and openssl_version_supported_for_signature_validation()

Azure-WALinuxAgent-a976115/azurelinuxagent/ga/signing_certificate_util.py

# Windows Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os from azurelinuxagent.common import logger from azurelinuxagent.common import conf from azurelinuxagent.common import event from azurelinuxagent.common.event import WALAEventOperation _MICROSOFT_ROOT_CERT_2011_03_22 = """-----BEGIN CERTIFICATE----- MIIF7TCCA9WgAwIBAgIQP4vItfyfspZDtWnWbELhRDANBgkqhkiG9w0BAQsFADCB iDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1Jl ZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMp TWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEw MzIyMjIwNTI4WhcNMzYwMzIyMjIxMzA0WjCBiDELMAkGA1UEBhMCVVMxEzARBgNV BAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jv c29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlm aWNhdGUgQXV0aG9yaXR5IDIwMTEwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK AoICAQCygEGqNThNE3IyaCJNuLLx/9VSvGzH9dJKjDbu0cJcfoyKrq8TKG/Ac+M6 ztAlqFo6be+ouFmrEyNozQwph9FvgFyPRH9dkAFSWKxRxV8qh9zc2AodwQO5e7BW 6KPeZGHCnvjzfLnsDbVU/ky2ZU+I8JxImQxCCwl8MVkXeQZ4KI2JOkwDJb5xalwL 54RgpJki49KvhKSn+9GY7Qyp3pSJ4Q6g3MDOmT3qCFK7VnnkH4S6Hri0xElcTzFL h93dBWcmmYDgcRGjuKVB4qRTufcyKYMME782XgSzS0NHL2vikR7TmE/dQgfI6B0S /Jmpaz6SfsjWaTr8ZL22CZ3K/QwLopt3YEsDlKQwaRLWQi3BQUzK3Kr9j1uDRprZ /LHR47PJf0h6zSTwQY9cdNCssBAgBkm3xy0hyFfj0IbzA2j70M5xwYmZSmQBbP3s MJHPQTySx+W6hh1hhMdfgzlirrSSL0fzC/hV66AfWdC7dJse0Hbm8ukG1xDo+mTe acY1logC8Ea4PyeZb8txiSk190gWAjWP1Xl8TQLPX+uKg09FcYj5qQ1OcunCnAfP SRtOBA5jUYxe2ADBVSy2xuDCZU7JNDn1nLPEfuhhbhNfFcRf2X7tHc7uROzLLoax 7Dj2cO2rXBPB2Q8Nx4CyVe0096yb5MPa50c8prWPMd/FS6/r8QIDAQABo1EwTzAL 
BgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUci06AjGQQ7kU BU7h6qfHMdEjiTQwEAYJKwYBBAGCNxUBBAMCAQAwDQYJKoZIhvcNAQELBQADggIB AH9yzw+3xRXbm8BJyiZb/p4T5tPw0tuXX/JLP02zrhmu7deXoKzvqTqjwkGw5biR nhOBJAPmCf0/V0A5ISRW0RAvS0CpNoZLtFNXmvvxfomPEf4YbFGq6O0JlbXlccmh 6Yd1phV/yX43VF50k8XDZ8wNT2uoFwxtCJJ+i92Bqi1wIcM9BhS7vyRep4TXPw8h Ir1LAAbblxzYXtTFC1yHblCk6MM4pPvLLMWSZpuFXst6bJN8gClYW1e1QGm6CHmm ZGIVnYeWRbVmIyADixxzoNOieTPgUFmG2y/lAiXqcyqfABTINseSO+lOAOzYVgm5 M0kS0lQLAausR7aRKX1MtHWAUgHoyoL2n8ysnI8X6i8msKtyrAv+nlEex0NVZ09R s1fWtuzuUrc66U7h14GIvE+OdbtLqPA1qibUZ2dJsnBMO5PcHd94kIZysjik0dyS TclY6ysSXNQ7roxrsIPlAT/4CTL2kzU0Iq/dNw13CYArzUgA8YyZGUcFAenRv9FO 0OYoQzeZpApKCNmacXPSqs0xE2N2oTdvkjgefRI8ZjLny23h/FKJ3crWZgWalmG+ oijHHKOnNlA8OqTfSm7mhzvO6/DggTedEzxSjr25HTTGHdUKaj2YKXCMiSrRq4IQ SB/c9O+lxbtVGjhjhE63bK2VVOxlIhBJF7jAHscPrFRH -----END CERTIFICATE-----""" def get_microsoft_signing_certificate_path(): return os.path.join(conf.get_lib_dir(), "microsoft_root_certificate.pem") def _write_certificate(cert_string, output_path): """ Write certificate string to file specified by output path. Overwrite file if it already exists. """ umask = None try: # Only owner should have read-write permissions on file creation (600) umask = os.umask(0o077) with open(output_path, "w") as cert_file: cert_file.write(cert_string) logger.info("Signing certificate written to {0}".format(output_path)) except Exception as err: msg = "Failed to write signing certificate to file ('{0}'). Error details:\n{1}".format(output_path, err) event.error(op=WALAEventOperation.SignatureValidation, fmt=msg) finally: if umask is not None: os.umask(umask) def write_signing_certificates(): """ Write root certificates to the library directory (directory specified in conf.py). We store root certificates as strings and then write them to a file on agent init. Both the baked-in and self-update agent can use the same file path for the certificates. 
    """
    _write_certificate(_MICROSOFT_ROOT_CERT_2011_03_22, get_microsoft_signing_certificate_path())

Azure-WALinuxAgent-a976115/azurelinuxagent/ga/update.py

# Windows Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import glob
import os
import platform
import re
import shutil
import signal
import stat
import subprocess
import sys
import time
import uuid

from datetime import datetime, timedelta

from azurelinuxagent.common import conf
from azurelinuxagent.common import logger
from azurelinuxagent.common import event
from azurelinuxagent.common.utils import fileutil, textutil
from azurelinuxagent.common.agent_supported_feature import get_supported_feature_by_name, SupportedFeatureNames, \
    get_agent_supported_features_list_for_crp
from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP
from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator
from azurelinuxagent.common.event import add_event, initialize_event_logger_vminfo_common_parameters_and_protocol, \
    WALAEventOperation, EVENTS_DIRECTORY
from azurelinuxagent.common.exception import ExitException, AgentUpgradeExitException, AgentMemoryExceededException
from azurelinuxagent.ga.firewall_manager import FirewallManager, FirewallStateError
from azurelinuxagent.common.future import ustr, UTC, datetime_min_utc
from azurelinuxagent.common.osutil import get_osutil, systemd
from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler
from azurelinuxagent.common.protocol.goal_state import GoalStateSource
from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol, VmSettingsNotSupported
from azurelinuxagent.common.protocol.restapi import VERSION_0
from azurelinuxagent.common.protocol.util import get_protocol_util
from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.utils.archive import StateArchiver, AGENT_STATUS_FILE
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.version import AGENT_LONG_NAME, AGENT_NAME, AGENT_DIR_PATTERN, CURRENT_AGENT, AGENT_VERSION, \
    CURRENT_VERSION, DISTRO_NAME, DISTRO_VERSION, get_lis_version, \
    has_logrotate, PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO, get_daemon_version
from azurelinuxagent.ga.agent_update_handler import get_agent_update_handler
from azurelinuxagent.ga.collect_logs import get_collect_logs_handler, is_log_collection_allowed
from azurelinuxagent.ga.collect_telemetry_events import get_collect_telemetry_events_handler
from azurelinuxagent.ga.env import get_env_handler
from azurelinuxagent.ga.exthandlers import ExtHandlersHandler, list_agent_lib_directory, \
    ExtensionStatusValue, ExtHandlerStatusValue
from azurelinuxagent.ga.guestagent import GuestAgent
from azurelinuxagent.ga.monitor import get_monitor_handler
from azurelinuxagent.ga.send_telemetry_events import get_send_telemetry_events_handler
from azurelinuxagent.ga.signing_certificate_util import write_signing_certificates, get_microsoft_signing_certificate_path

CHILD_HEALTH_INTERVAL = 15 * 60
CHILD_LAUNCH_INTERVAL = 5 * 60
CHILD_LAUNCH_RESTART_MAX = 3
CHILD_POLL_INTERVAL = 60

GOAL_STATE_PERIOD_EXTENSIONS_DISABLED = 5 * 60

ORPHAN_POLL_INTERVAL = 3
ORPHAN_WAIT_INTERVAL = 15 * 60

AGENT_SENTINEL_FILE = "current_version"

# This file marks that the first goal state
(after provisioning) has been completed, either because it converged or because we received another goal # state before it converged. The contents will be an instance of ExtensionsSummary. If the file does not exist then we have not finished processing # the goal state. INITIAL_GOAL_STATE_FILE = "initial_goal_state" READONLY_FILE_GLOBS = [ "*.crt", "*.p7m", "*.pem", "*.prv", "Certificates.xml", "ovf-env.xml" ] class ExtensionsSummary(object): """ The extensions summary is a list of (extension name, extension status) tuples for the current goal state; it is used to report changes in the status of extensions and to keep track of when the goal state converges (i.e. when all extensions in the goal state reach a terminal state: success or error.) The summary is computed from the VmStatus reported to blob storage. """ def __init__(self, vm_status=None): if vm_status is None: self.summary = [] self.converged = True else: # take the name and status of the extension if is it not None, else use the handler's self.summary = [(o.name, o.status) for o in map(lambda h: h.extension_status if h.extension_status is not None else h, vm_status.vmAgent.extensionHandlers)] self.summary.sort(key=lambda s: s[0]) # sort by extension name to make comparisons easier self.converged = all(status in (ExtensionStatusValue.success, ExtensionStatusValue.error, ExtHandlerStatusValue.ready, ExtHandlerStatusValue.not_ready) for _, status in self.summary) def __eq__(self, other): return self.summary == other.summary def __ne__(self, other): return not (self == other) def __str__(self): return ustr(self.summary) def get_update_handler(): return UpdateHandler() class UpdateHandler(object): TELEMETRY_HEARTBEAT_PERIOD = timedelta(minutes=30) CHECK_MEMORY_USAGE_PERIOD = timedelta(seconds=conf.get_cgroup_check_period()) def __init__(self): self.osutil = get_osutil() self.protocol_util = get_protocol_util() self._is_running = True self.agents = [] self.child_agent = None self.child_launch_time = None 
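# The ExtensionsSummary convergence rule described above (a goal state has converged once every
# extension reports a terminal status) can be sketched standalone. This is a minimal illustration,
# not the agent's code: the status strings below are stand-ins for the real ExtensionStatusValue /
# ExtHandlerStatusValue constants, and `summarize` is a hypothetical helper name.

```python
# Illustrative terminal statuses; the agent uses ExtensionStatusValue.success/.error
# and ExtHandlerStatusValue.ready/.not_ready for this check.
TERMINAL_STATUSES = {"success", "error", "Ready", "NotReady"}

def summarize(handler_statuses):
    """handler_statuses: iterable of (extension_name, status) pairs, mirroring what
    ExtensionsSummary derives from the VmStatus reported to blob storage."""
    summary = sorted(handler_statuses)  # sort by extension name for stable comparisons
    converged = all(status in TERMINAL_STATUSES for _, status in summary)
    return summary, converged
```

# Sorting makes two summaries comparable with plain ==, which is how the agent detects that the
# set of extension statuses changed between status reports.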
self.child_launch_attempts = 0 self.child_process = None self.signal_handler = None self._last_telemetry_heartbeat = None self._heartbeat_id = str(uuid.uuid4()).upper() self._heartbeat_counter = 0 self._initial_attempt_check_memory_usage = True self._last_check_memory_usage_time = time.time() self._check_memory_usage_last_error_report = datetime_min_utc self._cloud_init_completed = False # Only used when Extensions.WaitForCloudInit is enabled; note that this variable is always reset on service start. # VM Size is reported via the heartbeat, default it here. self._vm_size = None # these members are used to avoid reporting errors too frequently self._heartbeat_update_goal_state_error_count = 0 self._update_goal_state_error_count = 0 self._update_goal_state_next_error_report = datetime_min_utc self._report_status_last_failed_goal_state = None # incarnation of the last goal state that has been fully processed # (None if no goal state has been processed) self._last_incarnation = None # ID of the last extensions goal state that has been fully processed (incarnation for WireServer goal states or etag for HostGAPlugin goal states) # (None if no extensions goal state has been processed) self._last_extensions_gs_id = None # Goal state that is currently been processed (None if no goal state is being processed) self._goal_state = None # Whether the agent supports FastTrack (it does, as long as the HostGAPlugin supports the vmSettings API) self._supports_fast_track = False self._extensions_summary = ExtensionsSummary() self._is_initial_goal_state = not os.path.exists(self._initial_goal_state_file_path()) if not conf.get_extensions_enabled(): self._goal_state_period = GOAL_STATE_PERIOD_EXTENSIONS_DISABLED else: if self._is_initial_goal_state: self._goal_state_period = conf.get_initial_goal_state_period() else: self._goal_state_period = conf.get_goal_state_period() def run_latest(self, child_args=None): """ This method is called from the daemon to find and launch the most 
current, downloaded agent. Note: - Most events should be tagged to the launched agent (agent_version) """ if self.child_process is not None: raise Exception("Illegal attempt to launch multiple goal state Agent processes") if self.signal_handler is None: self.signal_handler = signal.signal(signal.SIGTERM, self.forward_signal) both_auto_updates_used = conf.is_present("AutoUpdate.Enabled") and conf.is_present("AutoUpdate.UpdateToLatestVersion") if both_auto_updates_used: msg = u"The legacy AutoUpdate.Enabled configuration is also used, but it is ignored in favor of the new configuration (AutoUpdate.UpdateToLatestVersion)." logger.warn(msg) add_event( AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.Enable, is_success=False, message=msg, log_event=False) # If new flag explicitly set, agent will use latest agent downloaded and will not fall back to installed agent. See the new flag definition in conf.py use_latest_agent = conf.is_present("AutoUpdate.UpdateToLatestVersion") or conf.get_autoupdate_enabled() latest_agent = None if not use_latest_agent else self.get_latest_agent_greater_than_daemon( daemon_version=CURRENT_VERSION) if latest_agent is None: logger.info(u"Installed Agent {0} is the most current agent", CURRENT_AGENT) agent_cmd = "python -u {0} -run-exthandlers".format(sys.argv[0]) agent_dir = os.getcwd() agent_name = CURRENT_AGENT agent_version = CURRENT_VERSION else: logger.info(u"Determined Agent {0} to be the latest agent", latest_agent.name) agent_cmd = latest_agent.get_agent_cmd() agent_dir = latest_agent.get_agent_dir() agent_name = latest_agent.name agent_version = latest_agent.version if child_args is not None: agent_cmd = "{0} {1}".format(agent_cmd, child_args) try: # Launch the correct Python version for python-based agents cmds = textutil.safe_shlex_split(agent_cmd) if cmds[0].lower() == "python": cmds[0] = sys.executable agent_cmd = " ".join(cmds) self._evaluate_agent_health(latest_agent) self.child_process = subprocess.Popen( cmds, 
cwd=agent_dir, stdout=sys.stdout, stderr=sys.stderr, env=os.environ) logger.verbose(u"Agent {0} launched with command '{1}'", agent_name, agent_cmd) # Setting the poll interval to poll every second to reduce the agent provisioning time; # The daemon shouldn't wait for 60secs before starting the ext-handler in case the # ext-handler kills itself during agent-update during the first 15 mins (CHILD_HEALTH_INTERVAL) poll_interval = 1 ret = None start_time = time.time() while (time.time() - start_time) < CHILD_HEALTH_INTERVAL: time.sleep(poll_interval) try: ret = self.child_process.poll() except OSError: # if child_process has terminated, calling poll could raise an exception ret = -1 if ret is not None: break if ret is None or ret <= 0: msg = u"Agent {0} launched with command '{1}' is successfully running".format( agent_name, agent_cmd) logger.info(msg) add_event( AGENT_NAME, version=agent_version, op=WALAEventOperation.Enable, is_success=True, message=msg, log_event=False) if ret is None: # Wait for the process to exit if self.child_process.wait() > 0: msg = u"ExtHandler process {0} launched with command '{1}' exited with return code: {2}".format( agent_name, agent_cmd, ret) logger.warn(msg) else: msg = u"Agent {0} launched with command '{1}' failed with return code: {2}".format( agent_name, agent_cmd, ret) logger.warn(msg) add_event( AGENT_NAME, version=agent_version, op=WALAEventOperation.Enable, is_success=False, message=msg) except Exception as e: # Ignore child errors during termination if self.is_running: msg = u"Agent {0} launched with command '{1}' failed with exception: \n".format( agent_name, agent_cmd) logger.warn(msg) detailed_message = '{0} {1}'.format(msg, textutil.format_exception(e)) add_event( AGENT_NAME, version=agent_version, op=WALAEventOperation.Enable, is_success=False, message=detailed_message) if latest_agent is not None: latest_agent.mark_failure(is_fatal=True, reason=detailed_message) self.child_process = None return def run(self, 
debug=False): """ This is the main loop which watches for agent and extension updates. """ try: logger.info("{0} (Goal State Agent version {1})", AGENT_LONG_NAME, AGENT_VERSION) logger.info("OS: {0} {1}", DISTRO_NAME, DISTRO_VERSION) logger.info("Python: {0}.{1}.{2}", PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO) vm_arch = self.osutil.get_vm_arch() logger.info("CPU Arch: {0}", vm_arch) os_info_msg = u"Distro: {dist_name}-{dist_ver}; "\ u"OSUtil: {util_name}; "\ u"AgentService: {service_name}; "\ u"Python: {py_major}.{py_minor}.{py_micro}; "\ u"Arch: {vm_arch}; "\ u"systemd: {systemd}; "\ u"systemd_version: {systemd_version}; "\ u"LISDrivers: {lis_ver}; "\ u"logrotate: {has_logrotate};".format( dist_name=DISTRO_NAME, dist_ver=DISTRO_VERSION, util_name=type(self.osutil).__name__, service_name=self.osutil.service_name, py_major=PY_VERSION_MAJOR, py_minor=PY_VERSION_MINOR, py_micro=PY_VERSION_MICRO, vm_arch=vm_arch, systemd=systemd.is_systemd(), systemd_version=systemd.get_version(), lis_ver=get_lis_version(), has_logrotate=has_logrotate() ) logger.info(os_info_msg) # # Initialize the goal state; some components depend on information provided by the goal state and this # call ensures the required info is initialized (e.g. telemetry depends on the container ID.) # protocol = self.protocol_util.get_protocol(save_to_history=True) self._initialize_goal_state(protocol) # Initialize the common parameters for telemetry events initialize_event_logger_vminfo_common_parameters_and_protocol(protocol) # Send telemetry if protocol endpoint is not the known WireServer endpoint. endpoint = protocol.get_endpoint() if endpoint is not None and endpoint != KNOWN_WIRESERVER_IP: message = 'Protocol endpoint ({0}) is not known wireserver ip: {1}'.format(endpoint, KNOWN_WIRESERVER_IP) logger.info(message) add_event(op=WALAEventOperation.ProtocolEndpoint, message=message) # Send telemetry for the OS-specific info. 
add_event(AGENT_NAME, op=WALAEventOperation.OSInfo, message=os_info_msg) self._log_openssl_info() # # Perform initialization tasks # self._initialize_firewall(protocol.get_endpoint()) from azurelinuxagent.ga.exthandlers import get_exthandlers_handler, migrate_handler_state exthandlers_handler = get_exthandlers_handler(protocol) migrate_handler_state() from azurelinuxagent.ga.remoteaccess import get_remote_access_handler remote_access_handler = get_remote_access_handler(protocol) agent_update_handler = get_agent_update_handler(protocol) self._ensure_no_orphans() self._emit_restart_event() self._emit_changes_in_default_configuration() self._ensure_readonly_files() self._ensure_cgroups_initialized() self._ensure_extension_telemetry_state_configured_properly(protocol) self._cleanup_legacy_goal_state_history() write_signing_certificates() # Get all thread handlers telemetry_handler = get_send_telemetry_events_handler(self.protocol_util) all_thread_handlers = [ get_monitor_handler(), get_env_handler(), telemetry_handler, get_collect_telemetry_events_handler(telemetry_handler) ] if is_log_collection_allowed(): all_thread_handlers.append(get_collect_logs_handler()) # Launch all monitoring threads self._start_threads(all_thread_handlers) logger.info("Goal State Period: {0} sec. 
This indicates how often the agent checks for new goal states and reports status.", self._goal_state_period) while self.is_running: self._check_daemon_running(debug) self._check_threads_running(all_thread_handlers) self._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self._send_heartbeat_telemetry(agent_update_handler) self._check_agent_memory_usage() time.sleep(self._goal_state_period) except AgentUpgradeExitException as exitException: add_event(op=WALAEventOperation.AgentUpgrade, message=exitException.reason, log_event=False) logger.info(exitException.reason) except ExitException as exitException: logger.info(exitException.reason) except Exception as error: msg = u"Agent {0} failed with exception: {1}".format(CURRENT_AGENT, ustr(error)) self._set_sentinel(msg=msg) logger.warn(msg) logger.warn(textutil.format_exception(error)) sys.exit(1) # additional return here because sys.exit is mocked in unit tests return # pylint: disable=unreachable self._shutdown() sys.exit(0) @staticmethod def _log_openssl_info(): try: version = shellutil.run_command(["openssl", "version"]) message = "OpenSSL version: {0}".format(version) logger.info(message) add_event(op=WALAEventOperation.OpenSsl, message=message, is_success=True) except Exception as e: message = "Failed to get OpenSSL version: {0}".format(e) logger.info(message) add_event(op=WALAEventOperation.OpenSsl, message=message, is_success=False, log_event=False) # # Collect telemetry about the 'pkey' command. CryptUtil get_pubkey_from_prv() uses the 'pkey' command only as a fallback after trying 'rsa'. # 'pkey' also works for RSA keys, but it may not be available on older versions of OpenSSL. Check telemetry after a few releases and if there # are no versions of OpenSSL that do not support 'pkey' consider removing the use of 'rsa' altogether. 
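# The 'pkey' capability probe described in the comment above can be sketched as a small standalone
# helper. This is an illustrative sketch, not the agent's implementation: `openssl_supports` is a
# hypothetical name, and it assumes (as the agent's probe does) that `openssl help <subcommand>`
# exits nonzero when the subcommand is not available.

```python
import subprocess

def openssl_supports(subcommand, openssl_cmd="openssl"):
    # Run "openssl help <subcommand>"; a zero exit status indicates the
    # subcommand exists. OSError covers a missing openssl binary entirely.
    try:
        result = subprocess.run([openssl_cmd, "help", subcommand],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0
    except OSError:
        return False
```

# Probing capabilities at startup (instead of parsing version strings) keeps the check accurate
# across distro builds that backport or drop individual OpenSSL subcommands.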
# try: shellutil.run_command(["openssl", "help", "pkey"]) except Exception as e: message = "OpenSSL does not support the pkey command: {0}".format(e) logger.info(message) add_event(op=WALAEventOperation.OpenSsl, message=message, is_success=False, log_event=False) def _initialize_goal_state(self, protocol): # # Block until we can fetch the first goal state (self._try_update_goal_state() does its own logging and error handling). # event.info(WALAEventOperation.GoalState, "Initializing the goal state...") while not self._try_update_goal_state(protocol): time.sleep(conf.get_goal_state_period()) event.info(WALAEventOperation.GoalState, "Goal state initialization completed.") # # If FastTrack is disabled we need to check if the current goal state (which will be retrieved using the WireServer and # hence will be a Fabric goal state) is outdated. # if not conf.get_enable_fast_track(): last_fast_track_timestamp = HostPluginProtocol.get_fast_track_timestamp() if last_fast_track_timestamp is not None: egs = protocol.client.get_goal_state().extensions_goal_state if egs.created_on_timestamp < last_fast_track_timestamp: egs.is_outdated = True event.info( WALAEventOperation.GoalState, "The current Fabric goal state is older than the most recent FastTrack goal state; will skip it.\nFabric: {0}\nFastTrack: {1}", egs.created_on_timestamp, last_fast_track_timestamp) def _wait_for_cloud_init(self): if conf.get_wait_for_cloud_init() and not self._cloud_init_completed: message = "Waiting for cloud-init to complete..." logger.info(message) add_event(op=WALAEventOperation.CloudInit, message=message) try: output = shellutil.run_command(["cloud-init", "status", "--wait"], timeout=conf.get_wait_for_cloud_init_timeout()) message = "cloud-init completed\n{0}".format(output) logger.info(message) add_event(op=WALAEventOperation.CloudInit, message=message) except Exception as e: message = "An error occurred while waiting for cloud-init; will proceed to execute VM extensions. 
Extensions that have conflicts with cloud-init may fail.\n{0}".format(ustr(e)) logger.error(message) add_event(op=WALAEventOperation.CloudInit, message=message, is_success=False, log_event=False) self._cloud_init_completed = True # Mark as completed even on error since we will proceed to execute extensions def _get_vm_arch(self): return platform.machine() def _check_daemon_running(self, debug): # Check that the parent process (the agent's daemon) is still running if not debug and self._is_orphaned: raise ExitException("Agent {0} is an orphan -- exiting".format(CURRENT_AGENT)) def _start_threads(self, all_thread_handlers): for thread_handler in all_thread_handlers: thread_handler.run() def _check_threads_running(self, all_thread_handlers): # Check that all the threads are still running for thread_handler in all_thread_handlers: if thread_handler.keep_alive() and not thread_handler.is_alive(): logger.warn("{0} thread died, restarting".format(thread_handler.get_thread_name())) thread_handler.start() def _try_update_goal_state(self, protocol): """ Attempts to update the goal state and returns True on success or False on failure, sending telemetry events about the failures. """ max_errors_to_log = 3 try: # # For Fast Track goal states we need to ensure that the tenant certificate is in the goal state. # # Some scenarios can produce inconsistent goal states. For example, during hibernation/resume, the Fabric goal state changes (the # tenant certificate is re-generated when the VM is restarted) *without* the incarnation necessarily changing (e.g. if the incarnation # is 1 before the hibernation; on resume the incarnation is set to 1 even though the goal state has a new certificate). If a Fast # Track goal state comes after that, the extensions will need the new certificate. # # For new Fast Track goal states, we check the certificates and, if an inconsistency is detected, re-fetch the entire goal state # (update_goal_state(force_update=True). 
We re-fetch up to two times: once without waiting (to address scenarios like hibernation) and once with
            # a delay (to address situations in which the HGAP and the WireServer are temporarily out of sync).
            #
            for attempt in range(3):
                protocol.client.update_goal_state(force_update=attempt > 0, silent=self._update_goal_state_error_count >= max_errors_to_log, save_to_history=True)

                self._goal_state = protocol.get_goal_state()

                if not (self._processing_new_extensions_goal_state() and self._goal_state.extensions_goal_state.source == GoalStateSource.FastTrack):
                    break

                if self._check_certificates(self._goal_state):
                    if attempt > 0:
                        event.info(WALAEventOperation.FetchGoalState, "The extensions goal state is now in sync with the tenant cert.")
                    break

                if attempt == 0:
                    event.info(WALAEventOperation.FetchGoalState, "The extensions are out of sync with the tenant cert. Will refresh the goal state.")
                elif attempt == 1:
                    event.info(WALAEventOperation.FetchGoalState, "The extensions are still out of sync with the tenant cert. Will refresh the goal state one more time after a short delay.")
                    time.sleep(conf.get_goal_state_period())
                else:
                    event.warn(WALAEventOperation.FetchGoalState, "The extensions are still out of sync with the tenant cert. Will continue execution, but some extensions may fail.")
                    break

            if self._update_goal_state_error_count > 0:
                event.info(
                    WALAEventOperation.FetchGoalState,
                    "Fetching the goal state recovered from previous errors. Fetched {0} (certificates: {1})",
                    self._goal_state.extensions_goal_state.id,
                    self._goal_state.certs.summary)
                self._update_goal_state_error_count = 0

            try:
                self._supports_fast_track = conf.get_enable_fast_track() and protocol.client.get_host_plugin().check_vm_settings_support()
            except VmSettingsNotSupported:
                self._supports_fast_track = False

            return True

        except Exception as e:
            self._update_goal_state_error_count += 1
            self._heartbeat_update_goal_state_error_count += 1
            if self._update_goal_state_error_count <= max_errors_to_log:
                # Report up to 'max_errors_to_log' errors immediately
                self._update_goal_state_next_error_report = datetime.now(UTC)
                event.error(WALAEventOperation.FetchGoalState, "Error fetching the goal state: {0}", textutil.format_exception(e))
            else:
                # Report one single periodic error every 6 hours
                if datetime.now(UTC) >= self._update_goal_state_next_error_report:
                    self._update_goal_state_next_error_report = datetime.now(UTC) + timedelta(hours=6)
                    event.error(WALAEventOperation.FetchGoalState, "Fetching the goal state is still failing: {0}", textutil.format_exception(e))
            return False

    @staticmethod
    def _check_certificates(goal_state):
        # Check that the certificates needed by extensions are in the goal state certificates summary
        for extension in goal_state.extensions_goal_state.extensions:
            for settings in extension.settings:
                if settings.protectedSettings is None:
                    continue
                certificates = goal_state.certs.summary
                if not any(settings.certificateThumbprint == c['thumbprint'] for c in certificates):
                    event.warn(
                        WALAEventOperation.FetchGoalState,
                        "The extensions goal state is out of sync with the tenant cert. Certificate {0}, needed by {1}, is missing.",
                        settings.certificateThumbprint,
                        extension.name)
                    return False
        return True

    def _processing_new_incarnation(self):
        """
        True if we are currently processing a new incarnation (i.e. WireServer goal state)
        """
        return self._goal_state is not None and self._goal_state.incarnation != self._last_incarnation

    def _processing_new_extensions_goal_state(self):
        """
        True if we are currently processing a new extensions goal state
        """
        return self._goal_state is not None and self._goal_state.extensions_goal_state.id != self._last_extensions_gs_id and not self._goal_state.extensions_goal_state.is_outdated

    def _process_goal_state(self, exthandlers_handler, remote_access_handler, agent_update_handler):
        protocol = exthandlers_handler.protocol

        # update self._goal_state
        if not self._try_update_goal_state(protocol):
            agent_update_handler.run(self._goal_state, self._processing_new_extensions_goal_state())
            # status reporting should be done even when the goal state is not updated
            self._report_status(exthandlers_handler, agent_update_handler)
            return

        # check for agent updates
        agent_update_handler.run(self._goal_state, self._processing_new_extensions_goal_state())

        self._wait_for_cloud_init()

        try:
            if self._processing_new_extensions_goal_state():
                if not self._extensions_summary.converged:
                    message = "A new goal state was received, but not all the extensions in the previous goal state have completed: {0}".format(self._extensions_summary)
                    logger.warn(message)
                    add_event(op=WALAEventOperation.GoalState, message=message, is_success=False, log_event=False)
                    if self._is_initial_goal_state:
                        self._on_initial_goal_state_completed(self._extensions_summary)
                self._extensions_summary = ExtensionsSummary()
                exthandlers_handler.run()

                # check cgroups and disable them if any extension started in the agent's cgroup after the goal state was processed.
                # Note: The monitor thread periodically checks this in addition to here.
CGroupConfigurator.get_instance().check_cgroups(cgroup_metrics=[]) # report status before processing the remote access, since that operation can take a long time self._report_status(exthandlers_handler, agent_update_handler) if self._processing_new_incarnation(): remote_access_handler.run() # lastly, archive the goal state history (but do it only on new goal states - no need to do it on every iteration) if self._processing_new_extensions_goal_state(): UpdateHandler._archive_goal_state_history() finally: if self._goal_state is not None: self._last_incarnation = self._goal_state.incarnation self._last_extensions_gs_id = self._goal_state.extensions_goal_state.id @staticmethod def _archive_goal_state_history(): try: archiver = StateArchiver(conf.get_lib_dir()) archiver.archive() except Exception as exception: logger.warn("Error cleaning up the goal state history: {0}", ustr(exception)) @staticmethod def _cleanup_legacy_goal_state_history(): try: StateArchiver.purge_legacy_goal_state_history() except Exception as exception: logger.warn("Error removing legacy history files: {0}", ustr(exception)) def _report_status(self, exthandlers_handler, agent_update_handler): # report_ext_handlers_status does its own error handling and returns None if an error occurred vm_status = exthandlers_handler.report_ext_handlers_status( goal_state_changed=self._processing_new_extensions_goal_state(), vm_agent_update_status=agent_update_handler.get_vmagent_update_status(), vm_agent_supports_fast_track=self._supports_fast_track) if vm_status is not None: self._report_extensions_summary(vm_status) if self._goal_state is not None: status_blob_text = exthandlers_handler.protocol.get_status_blob_data() if status_blob_text is None: status_blob_text = "{}" self._goal_state.save_to_history(status_blob_text, AGENT_STATUS_FILE) if self._goal_state.extensions_goal_state.is_outdated: exthandlers_handler.protocol.client.get_host_plugin().clear_fast_track_state() def _report_extensions_summary(self, 
vm_status): try: extensions_summary = ExtensionsSummary(vm_status) if self._extensions_summary != extensions_summary: self._extensions_summary = extensions_summary message = "Extension status: {0}".format(self._extensions_summary) logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message, is_success=True) if self._extensions_summary.converged: message = "All extensions in the goal state have reached a terminal state: {0}".format(extensions_summary) logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message, is_success=True) if self._is_initial_goal_state: self._on_initial_goal_state_completed(self._extensions_summary) except Exception as error: # report errors only once per goal state if self._report_status_last_failed_goal_state != self._goal_state.extensions_goal_state.id: self._report_status_last_failed_goal_state = self._goal_state.extensions_goal_state.id msg = u"Error logging the goal state summary: {0}".format(textutil.format_exception(error)) logger.warn(msg) add_event(op=WALAEventOperation.GoalState, is_success=False, message=msg) def _on_initial_goal_state_completed(self, extensions_summary): fileutil.write_file(self._initial_goal_state_file_path(), ustr(extensions_summary)) if conf.get_extensions_enabled() and self._goal_state_period != conf.get_goal_state_period(): self._goal_state_period = conf.get_goal_state_period() logger.info("Initial goal state completed, switched the goal state period to {0}", self._goal_state_period) self._is_initial_goal_state = False def forward_signal(self, signum, frame): if signum == signal.SIGTERM: self._shutdown() if self.child_process is None: return logger.info( u"Agent {0} forwarding signal {1} to {2}\n", CURRENT_AGENT, signum, self.child_agent.name if self.child_agent is not None else CURRENT_AGENT) self.child_process.send_signal(signum) if self.signal_handler not in (None, signal.SIG_IGN, signal.SIG_DFL): self.signal_handler(signum, frame) elif self.signal_handler is 
signal.SIG_DFL: if signum == signal.SIGTERM: self._shutdown() sys.exit(0) return @staticmethod def __get_daemon_version_for_update(): daemon_version = get_daemon_version() if daemon_version != FlexibleVersion(VERSION_0): return daemon_version # We return 0.0.0.0 if daemon version is not specified. In that case, # use the min version as 2.2.53 as we started setting the daemon version starting 2.2.53. return FlexibleVersion("2.2.53") def get_latest_agent_greater_than_daemon(self, daemon_version=None): """ If autoupdate is enabled, return the most current, downloaded, non-blacklisted agent which is not the current version (if any) and is greater than the `daemon_version`. Otherwise, return None (implying to use the installed agent). If `daemon_version` is None, we fetch it from the environment variable set by the DaemonHandler """ self._find_agents() daemon_version = self.__get_daemon_version_for_update() if daemon_version is None else daemon_version # Fetch the downloaded agents that are different from the current version and greater than the daemon version available_agents = [agent for agent in self.agents if agent.is_available and agent.version != CURRENT_VERSION and agent.version > daemon_version] return available_agents[0] if len(available_agents) >= 1 else None def _emit_restart_event(self): try: if not self._is_clean_start: msg = u"Agent did not terminate cleanly: {0}".format( fileutil.read_file(self._sentinel_file_path())) logger.info(msg) add_event( AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.Restart, is_success=False, message=msg) except Exception: pass return @staticmethod def _emit_changes_in_default_configuration(): try: def log_event(msg): logger.info("******** {0} ********", msg) add_event(AGENT_NAME, op=WALAEventOperation.ConfigurationChange, message=msg) def log_if_int_changed_from_default(name, current, message=""): default = conf.get_int_default_value(name) if default != current: log_event("{0} changed from its default: {1}. 
New value: {2}. {3}".format(name, default, current, message)) def log_if_op_disabled(name, value): if not value: log_event("{0} is set to False, not processing the operation".format(name)) def log_if_agent_versioning_feature_disabled(): supports_ga_versioning = False for _, feature in get_agent_supported_features_list_for_crp().items(): if feature.name == SupportedFeatureNames.GAVersioningGovernance: supports_ga_versioning = True break if not supports_ga_versioning: msg = "Agent : {0} doesn't support GA Versioning".format(CURRENT_VERSION) log_event(msg) log_if_int_changed_from_default("Extensions.GoalStatePeriod", conf.get_goal_state_period(), "Changing this value affects how often extensions are processed and status for the VM is reported. Too small a value may report the VM as unresponsive") log_if_int_changed_from_default("Extensions.InitialGoalStatePeriod", conf.get_initial_goal_state_period(), "Changing this value affects how often extensions are processed and status for the VM is reported. Too small a value may report the VM as unresponsive") log_if_op_disabled("OS.EnableFirewall", conf.enable_firewall()) log_if_op_disabled("Extensions.Enabled", conf.get_extensions_enabled()) log_if_op_disabled("AutoUpdate.Enabled", conf.get_autoupdate_enabled()) log_if_op_disabled("AutoUpdate.UpdateToLatestVersion", conf.get_auto_update_to_latest_version()) if conf.is_present("AutoUpdate.Enabled") and conf.get_autoupdate_enabled() != conf.get_auto_update_to_latest_version(): msg = "AutoUpdate.Enabled property is **Deprecated** now but it's set to different value from AutoUpdate.UpdateToLatestVersion. 
Please consider removing it if added by mistake" logger.warn(msg) add_event(AGENT_NAME, op=WALAEventOperation.ConfigurationChange, message=msg) if conf.enable_firewall(): log_if_int_changed_from_default("OS.EnableFirewallPeriod", conf.get_enable_firewall_period()) if conf.get_autoupdate_enabled(): log_if_int_changed_from_default("Autoupdate.Frequency", conf.get_autoupdate_frequency()) if conf.get_enable_fast_track(): log_if_op_disabled("Debug.EnableFastTrack", conf.get_enable_fast_track()) if conf.get_lib_dir() != "/var/lib/waagent": log_event("lib dir is in an unexpected location: {0}".format(conf.get_lib_dir())) log_if_agent_versioning_feature_disabled() except Exception as e: logger.warn("Failed to log changes in configuration: {0}", ustr(e)) def _ensure_no_orphans(self, orphan_wait_interval=ORPHAN_WAIT_INTERVAL): pid_files, ignored = self._write_pid_file() # pylint: disable=W0612 for pid_file in pid_files: try: pid = fileutil.read_file(pid_file) wait_interval = orphan_wait_interval while self.osutil.check_pid_alive(pid): wait_interval -= ORPHAN_POLL_INTERVAL if wait_interval <= 0: logger.warn( u"{0} forcibly terminated orphan process {1}", CURRENT_AGENT, pid) os.kill(pid, signal.SIGKILL) break logger.info( u"{0} waiting for orphan process {1} to terminate", CURRENT_AGENT, pid) time.sleep(ORPHAN_POLL_INTERVAL) os.remove(pid_file) except Exception as e: logger.warn( u"Exception occurred waiting for orphan agent to terminate: {0}", ustr(e)) return def _ensure_readonly_files(self): for g in READONLY_FILE_GLOBS: for path in glob.iglob(os.path.join(conf.get_lib_dir(), g)): # Owner should retain write permissions for signing certificate if path == get_microsoft_signing_certificate_path(): continue os.chmod(path, stat.S_IRUSR) def _ensure_cgroups_initialized(self): configurator = CGroupConfigurator.get_instance() configurator.initialize() def _evaluate_agent_health(self, latest_agent): """ Evaluate the health of the selected agent: If it is restarting too frequently, 
raise an Exception to force blacklisting. """ if latest_agent is None: self.child_agent = None return if self.child_agent is None or latest_agent.version != self.child_agent.version: self.child_agent = latest_agent self.child_launch_time = None self.child_launch_attempts = 0 if self.child_launch_time is None: self.child_launch_time = time.time() self.child_launch_attempts += 1 if (time.time() - self.child_launch_time) <= CHILD_LAUNCH_INTERVAL \ and self.child_launch_attempts >= CHILD_LAUNCH_RESTART_MAX: msg = u"Agent {0} restarted more than {1} times in {2} seconds".format( self.child_agent.name, CHILD_LAUNCH_RESTART_MAX, CHILD_LAUNCH_INTERVAL) raise Exception(msg) return def _filter_blacklisted_agents(self): self.agents = [agent for agent in self.agents if not agent.is_blacklisted] def _find_agents(self): """ Load all non-blacklisted agents currently on disk. """ try: self._set_and_sort_agents(self._load_agents()) self._filter_blacklisted_agents() except Exception as e: logger.warn(u"Exception occurred loading available agents: {0}", ustr(e)) return def _get_pid_parts(self): pid_file = conf.get_agent_pid_file_path() pid_dir = os.path.dirname(pid_file) pid_name = os.path.basename(pid_file) pid_re = re.compile(r"(\d+)_{0}".format(re.escape(pid_name))) return pid_dir, pid_name, pid_re def _get_pid_files(self): pid_dir, pid_name, pid_re = self._get_pid_parts() # pylint: disable=W0612 pid_files = [os.path.join(pid_dir, f) for f in os.listdir(pid_dir) if pid_re.match(f)] pid_files.sort(key=lambda f: int(pid_re.match(os.path.basename(f)).group(1))) return pid_files @property def is_running(self): return self._is_running @is_running.setter def is_running(self, value): self._is_running = value @property def _is_clean_start(self): return not os.path.isfile(self._sentinel_file_path()) @property def _is_orphaned(self): parent_pid = os.getppid() if parent_pid in (1, None): return True if not os.path.isfile(conf.get_agent_pid_file_path()): return True return 
fileutil.read_file(conf.get_agent_pid_file_path()) != ustr(parent_pid) def _load_agents(self): path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) return [GuestAgent.from_installed_agent(agent_dir) for agent_dir in glob.iglob(path) if os.path.isdir(agent_dir)] def _purge_agents(self): """ Remove from disk all directories and .zip files of unknown agents (without removing the current, running agent). """ path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) known_versions = [agent.version for agent in self.agents] if CURRENT_VERSION not in known_versions: logger.verbose( u"Running Agent {0} was not found in the agent manifest - adding to list", CURRENT_VERSION) known_versions.append(CURRENT_VERSION) for agent_path in glob.iglob(path): try: name = fileutil.trim_ext(agent_path, "zip") m = AGENT_DIR_PATTERN.match(name) if m is not None and FlexibleVersion(m.group(1)) not in known_versions: if os.path.isfile(agent_path): logger.info(u"Purging outdated Agent file {0}", agent_path) os.remove(agent_path) else: logger.info(u"Purging outdated Agent directory {0}", agent_path) shutil.rmtree(agent_path) except Exception as e: logger.warn(u"Purging {0} raised exception: {1}", agent_path, ustr(e)) return def _set_and_sort_agents(self, agents=None): if agents is None: agents = [] self.agents = agents self.agents.sort(key=lambda agent: agent.version, reverse=True) return def _set_sentinel(self, agent=CURRENT_AGENT, msg="Unknown cause"): try: fileutil.write_file( self._sentinel_file_path(), "[{0}] [{1}]".format(agent, msg)) except Exception as e: logger.warn( u"Exception writing sentinel file {0}: {1}", self._sentinel_file_path(), str(e)) return def _sentinel_file_path(self): return os.path.join(conf.get_lib_dir(), AGENT_SENTINEL_FILE) @staticmethod def _initial_goal_state_file_path(): return os.path.join(conf.get_lib_dir(), INITIAL_GOAL_STATE_FILE) def _shutdown(self): # Todo: Ensure all threads stopped when shutting down the main extension handler to 
ensure that the state of
        # all threads is clean.
        self.is_running = False

        if not os.path.isfile(self._sentinel_file_path()):
            return

        try:
            os.remove(self._sentinel_file_path())
        except Exception as e:
            logger.warn(
                u"Exception removing sentinel file {0}: {1}",
                self._sentinel_file_path(),
                str(e))
        return

    def _write_pid_file(self):
        pid_files = self._get_pid_files()

        pid_dir, pid_name, pid_re = self._get_pid_parts()

        previous_pid_file = None if len(pid_files) <= 0 else pid_files[-1]
        pid_index = -1 \
            if previous_pid_file is None \
            else int(pid_re.match(os.path.basename(previous_pid_file)).group(1))
        pid_file = os.path.join(pid_dir, "{0}_{1}".format(pid_index + 1, pid_name))

        try:
            fileutil.write_file(pid_file, ustr(os.getpid()))
            logger.info(u"{0} running as process {1}", CURRENT_AGENT, ustr(os.getpid()))
        except Exception as e:
            pid_file = None
            logger.warn(
                u"Exception writing goal state agent {0} pid to {1}: {2}",
                CURRENT_AGENT,
                pid_file,
                ustr(e))

        return pid_files, pid_file

    def _send_heartbeat_telemetry(self, agent_update_handler):
        if self._last_telemetry_heartbeat is None:
            self._last_telemetry_heartbeat = datetime.now(UTC) - UpdateHandler.TELEMETRY_HEARTBEAT_PERIOD

        if datetime.now(UTC) >= (self._last_telemetry_heartbeat + UpdateHandler.TELEMETRY_HEARTBEAT_PERIOD):
            auto_update_enabled = 1 if conf.get_auto_update_to_latest_version() else 0
            update_mode = agent_update_handler.get_current_update_mode()
            # Note: When we add new values to the heartbeat message, please add a semicolon at the end of the value.
            # This helps to parse the message easily in kusto queries with regex
            heartbeat_msg = "HeartbeatCounter: {0};HeartbeatId: {1};UpdateGSErrors: {2};AutoUpdate: {3};UpdateMode: {4};".format(
                self._heartbeat_counter,
                self._heartbeat_id,
                self._heartbeat_update_goal_state_error_count,
                auto_update_enabled,
                update_mode)

            # Write Heartbeat events/logs
            add_event(name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.HeartBeat, is_success=True,
                      message=heartbeat_msg, log_event=False)
            logger.info(u"[HEARTBEAT] Agent {0} is running as the goal state agent [DEBUG {1}]", CURRENT_AGENT, heartbeat_msg)

            # Update/Reset the counters
            self._heartbeat_counter += 1
            self._heartbeat_update_goal_state_error_count = 0
            self._last_telemetry_heartbeat = datetime.now(UTC)

    def _check_agent_memory_usage(self):
        """
        Checks the agent's current memory usage and safely exits the process if the agent has reached the memory limit
        """
        try:
            if conf.get_enable_agent_memory_usage_check() and self._extensions_summary.converged:
                # We delay the first memory usage check so that the current agent won't get blacklisted for restarting
                # too frequently (because of reaching the memory limit)
                if (self._initial_attempt_check_memory_usage and time.time() - self._last_check_memory_usage_time > CHILD_LAUNCH_INTERVAL) or \
                        (not self._initial_attempt_check_memory_usage and time.time() - self._last_check_memory_usage_time > conf.get_cgroup_check_period()):
                    self._last_check_memory_usage_time = time.time()
                    self._initial_attempt_check_memory_usage = False
                    CGroupConfigurator.get_instance().check_agent_memory_usage()
        except AgentMemoryExceededException as exception:
            msg = "Check on agent memory usage:\n{0}".format(ustr(exception))
            logger.info(msg)
            add_event(AGENT_NAME, op=WALAEventOperation.AgentMemory, is_success=True, message=msg)
            raise ExitException("Agent {0} has reached the memory limit -- exiting".format(CURRENT_AGENT))
        except Exception as exception:
            if self._check_memory_usage_last_error_report == datetime_min_utc or (self._check_memory_usage_last_error_report + timedelta(hours=6)) > datetime.now(UTC):
                self._check_memory_usage_last_error_report = datetime.now(UTC)
                msg = "Error checking the agent's memory usage: {0} --- [NOTE: Will not log the same error for the next 6 hours]".format(ustr(exception))
                logger.warn(msg)
                add_event(AGENT_NAME, op=WALAEventOperation.AgentMemory, is_success=False, message=msg)

    @staticmethod
    def _ensure_extension_telemetry_state_configured_properly(protocol):
        etp_enabled = get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported
        for name, path in list_agent_lib_directory(skip_agent_package=True):
            try:
                handler_instance = ExtHandlersHandler.get_ext_handler_instance_from_path(name=name, path=path, protocol=protocol)
            except Exception:
                # Ignore errors if any
                continue

            try:
                if handler_instance is not None:
                    # Recreate the HandlerEnvironment for existing extensions on startup.
                    # This is to ensure that existing extensions can start using the telemetry pipeline if they support
                    # it and also ensures that the extensions are not sending out telemetry if the Agent has to disable the feature.
                    handler_instance.create_handler_env()
                    events_dir = handler_instance.get_extension_events_dir()
                    # If ETP is enabled and the events directory doesn't exist for the handler, create it
                    if etp_enabled and not os.path.exists(events_dir):
                        fileutil.mkdir(events_dir, mode=0o700)
            except Exception as e:
                logger.warn(
                    "Unable to re-create HandlerEnvironment file on service startup. Error: {0}".format(ustr(e)))
                continue

        try:
            if not etp_enabled:
                # If the extension telemetry pipeline is disabled, ensure we delete all existing extension events directories
                # because the agent will not be listening on those events.
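The heartbeat message assembled in `_send_heartbeat_telemetry` above is a series of semicolon-terminated `Key: value;` pairs precisely so that it can be pulled apart with a regex (e.g. in Kusto queries). A hypothetical standalone parser illustrating that format (`parse_heartbeat` is not part of the agent):

```python
import re

# Hypothetical helper, not part of the agent: splits the semicolon-terminated
# "Key: value;" pairs produced by _send_heartbeat_telemetry into a dict.
def parse_heartbeat(message):
    return dict(re.findall(r"(\w+): ([^;]*);", message))

sample = "HeartbeatCounter: 12;HeartbeatId: 1f0c;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: Periodic;"
fields = parse_heartbeat(sample)
assert fields["HeartbeatCounter"] == "12"
assert fields["UpdateMode"] == "Periodic"
```

The trailing semicolon on every value, including the last one, keeps the pattern uniform, which is why the note above asks contributors to terminate each new value with one.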
extension_event_dirs = glob.glob(os.path.join(conf.get_ext_log_dir(), "*", EVENTS_DIRECTORY)) for ext_dir in extension_event_dirs: shutil.rmtree(ext_dir, ignore_errors=True) except Exception as e: logger.warn("Error when trying to delete existing Extension events directory. Error: {0}".format(ustr(e))) @staticmethod def _initialize_firewall(wire_server_address): try: if not conf.enable_firewall(): event.info(WALAEventOperation.Firewall, "Skipping firewall initialization, since OS.EnableFirewall=False") return firewall_manager = FirewallManager.create(wire_server_address) try: firewall_manager.remove_legacy_rule() except Exception as error: event.error(WALAEventOperation.Firewall, "Unable to remove legacy firewall rule. Error: {0}", ustr(error)) logger.info("Checking state of the firewall") try: if firewall_manager.check(): event.info(WALAEventOperation.Firewall, "The firewall rules for Azure Fabric are already setup:\n{0}", firewall_manager.get_state()) else: firewall_manager.setup() event.info(WALAEventOperation.Firewall, "Created firewall rules for Azure Fabric:\n{0}", firewall_manager.get_state()) except FirewallStateError as e: event.warn(WALAEventOperation.Firewall, "The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): {0}. 
Current state:\n{1}", ustr(e), firewall_manager.get_state()) # # Ensure firewall rules are persisted across reboots # event.info(WALAEventOperation.PersistFirewallRules, "Setting up persistent firewall rules") try: PersistFirewallRulesHandler(dst_ip=wire_server_address).setup() except Exception as error: event.error(WALAEventOperation.PersistFirewallRules, "Unable to setup the persistent firewall rules: {0}", ustr(error)) except Exception as e: event.error(WALAEventOperation.Firewall, "Error initializing firewall: {0}", ustr(e)) Azure-WALinuxAgent-a976115/azurelinuxagent/pa/000077500000000000000000000000001510742556200212175ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/pa/__init__.py000066400000000000000000000011661510742556200233340ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/000077500000000000000000000000001510742556200235605ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/__init__.py000066400000000000000000000013501510742556200256700ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
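The `azurelinuxagent.pa.deprovision` package that begins here re-exports a `get_deprovision_handler` factory (see its `__init__.py` below) that returns a distro-specific handler. A minimal standalone sketch of that dispatch pattern; the class names and distro table here are illustrative, not the agent's actual factory:

```python
# Illustrative distro -> handler dispatch; these classes stand in for
# DeprovisionHandler and its distro-specific subclasses.
class DefaultHandler(object):
    name = "default"

class ArchHandler(DefaultHandler):
    name = "arch"

_HANDLERS = {"arch": ArchHandler}

def get_handler(distro_name):
    # Fall back to the default handler for distros without a specialized one
    return _HANDLERS.get(distro_name, DefaultHandler)()

assert get_handler("arch").name == "arch"
assert get_handler("ubuntu").name == "default"
```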
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.pa.deprovision.factory import get_deprovision_handler __all__ = ["get_deprovision_handler"] Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/arch.py000066400000000000000000000025021510742556200250460ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction class ArchDeprovisionHandler(DeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(ArchDeprovisionHandler, self).__init__() def setup(self, deluser): warnings, actions = super(ArchDeprovisionHandler, self).setup(deluser) warnings.append("WARNING! 
/etc/machine-id will be removed.") files_to_del = ['/etc/machine-id'] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) return warnings, actions Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/clearlinux.py000066400000000000000000000023501510742556200263000ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction # pylint: enable=W0611 class ClearLinuxDeprovisionHandler(DeprovisionHandler): def __init__(self, distro): # pylint: disable=W0231 self.distro = distro def setup(self, deluser): warnings, actions = super(ClearLinuxDeprovisionHandler, self).setup(deluser) # Probably should just wipe /etc and /var here return warnings, actions Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/coreos.py000066400000000000000000000025111510742556200254230ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction class CoreOSDeprovisionHandler(DeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(CoreOSDeprovisionHandler, self).__init__() def setup(self, deluser): warnings, actions = super(CoreOSDeprovisionHandler, self).setup(deluser) warnings.append("WARNING! /etc/machine-id will be removed.") files_to_del = ['/etc/machine-id'] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) return warnings, actions Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/default.py000066400000000000000000000267641510742556200255750ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
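`default.py` (below) builds parallel `warnings` and `actions` lists, where each `DeprovisionAction` merely records a function and its arguments; nothing destructive runs until the user confirms and `invoke()` is called on each action. A hypothetical standalone sketch of this deferred-action pattern (the `Action` class here is a stand-in, not the agent's `DeprovisionAction`):

```python
# Illustrative sketch of the deferred-action pattern used by DeprovisionHandler:
# building the action list has no side effects; invoke() performs the work later.
class Action(object):
    def __init__(self, func, args=None):
        self.func = func
        self.args = [] if args is None else args

    def invoke(self):
        self.func(*self.args)

deleted = []
actions = [Action(deleted.append, ["/etc/machine-id"])]
assert deleted == []  # recording the action did nothing yet
for action in actions:
    action.invoke()
assert deleted == ["/etc/machine-id"]
```

Separating recording from execution is what lets `run()` print all warnings and ask for confirmation before any file is touched.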
# # Requires Python 2.6+ and Openssl 1.0+ # import glob import os.path import re import signal import sys import azurelinuxagent.common.conf as conf import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common import version from azurelinuxagent.ga.cgroupconfigurator import _AGENT_DROP_IN_FILE_SLICE, _DROP_IN_FILE_CPU_ACCOUNTING, \ _DROP_IN_FILE_CPU_QUOTA, _DROP_IN_FILE_MEMORY_ACCOUNTING, LOGCOLLECTOR_SLICE from azurelinuxagent.common.exception import ProtocolError from azurelinuxagent.common.osutil import get_osutil, systemd from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.ga.exthandlers import HANDLER_COMPLETE_NAME_PATTERN def read_input(message): if sys.version_info[0] >= 3: return input(message) else: # This is not defined in python3, and the linter will thus # throw an undefined-variable error on this line. # Suppress it here. return raw_input(message) # pylint: disable=E0602 class DeprovisionAction(object): def __init__(self, func, args=None, kwargs=None): if args is None: args = [] if kwargs is None: kwargs = {} self.func = func self.args = args self.kwargs = kwargs def invoke(self): self.func(*self.args, **self.kwargs) class DeprovisionHandler(object): def __init__(self): self.osutil = get_osutil() self.protocol_util = get_protocol_util() self.actions_running = False signal.signal(signal.SIGINT, self.handle_interrupt_signal) def del_root_password(self, warnings, actions): warnings.append("WARNING! root password will be disabled. " "You will not be able to login as root.") actions.append(DeprovisionAction(self.osutil.del_root_password)) def del_user(self, warnings, actions): try: ovfenv = self.protocol_util.get_ovf_env() except ProtocolError: warnings.append("WARNING! ovf-env.xml is not found.") warnings.append("WARNING! Skip delete user.") return username = ovfenv.username warnings.append(("WARNING! 
{0} account and entire home directory " "will be deleted.").format(username)) actions.append(DeprovisionAction(self.osutil.del_account, [username])) def regen_ssh_host_key(self, warnings, actions): warnings.append("WARNING! All SSH host key pairs will be deleted.") actions.append(DeprovisionAction(fileutil.rm_files, [conf.get_ssh_key_glob()])) def stop_agent_service(self, warnings, actions): warnings.append("WARNING! The waagent service will be stopped.") actions.append(DeprovisionAction(self.osutil.stop_agent_service)) def del_dirs(self, warnings, actions): # pylint: disable=W0613 dirs = [conf.get_lib_dir(), conf.get_ext_log_dir()] actions.append(DeprovisionAction(fileutil.rm_dirs, dirs)) def del_files(self, warnings, actions): # pylint: disable=W0613 files = ['/root/.bash_history', conf.get_agent_log_file()] actions.append(DeprovisionAction(fileutil.rm_files, files)) # For OpenBSD actions.append(DeprovisionAction(fileutil.rm_files, ["/etc/random.seed", "/var/db/host.random", "/etc/isakmpd/local.pub", "/etc/isakmpd/private/local.key", "/etc/iked/private/local.key", "/etc/iked/local.pub"])) def del_resolv(self, warnings, actions): warnings.append("WARNING! /etc/resolv.conf will be deleted.") files_to_del = ["/etc/resolv.conf"] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) def del_dhcp_lease(self, warnings, actions): warnings.append("WARNING! 
Cached DHCP leases will be deleted.") dirs_to_del = ["/var/lib/dhclient", "/var/lib/dhcpcd", "/var/lib/dhcp"] actions.append(DeprovisionAction(fileutil.rm_dirs, dirs_to_del)) # For FreeBSD and OpenBSD actions.append(DeprovisionAction(fileutil.rm_files, ["/var/db/dhclient.leases.*"])) # For FreeBSD, NM controlled actions.append(DeprovisionAction(fileutil.rm_files, ["/var/lib/NetworkManager/dhclient-*.lease"])) # For Ubuntu >= 18.04, using systemd-networkd actions.append(DeprovisionAction(fileutil.rm_files, ["/run/systemd/netif/leases/*"])) def del_ext_handler_files(self, warnings, actions): # pylint: disable=W0613 ext_dirs = [d for d in os.listdir(conf.get_lib_dir()) if os.path.isdir(os.path.join(conf.get_lib_dir(), d)) and re.match(HANDLER_COMPLETE_NAME_PATTERN, d) is not None and not version.is_agent_path(d)] for ext_dir in ext_dirs: ext_base = os.path.join(conf.get_lib_dir(), ext_dir) files = glob.glob(os.path.join(ext_base, 'status', '*.status')) files += glob.glob(os.path.join(ext_base, 'config', '*.settings')) files += glob.glob(os.path.join(ext_base, 'config', 'HandlerStatus')) files += glob.glob(os.path.join(ext_base, 'mrseq')) if len(files) > 0: actions.append(DeprovisionAction(fileutil.rm_files, files)) def del_lib_dir_files(self, warnings, actions): # pylint: disable=W0613 known_files = [ 'HostingEnvironmentConfig.xml', 'Incarnation', 'partition', 'Protocol', 'SharedConfig.xml', 'WireServerEndpoint', 'published_hostname', 'fast_track.json', 'initial_goal_state', 'waagent_rsm_update', 'waagent_initial_update' ] known_files_glob = [ 'Extensions.*.xml', 'ExtensionsConfig.*.xml', 'GoalState.*.xml' ] lib_dir = conf.get_lib_dir() files = [f for f in \ [os.path.join(lib_dir, kf) for kf in known_files] \ if os.path.isfile(f)] for p in known_files_glob: files += glob.glob(os.path.join(lib_dir, p)) if len(files) > 0: actions.append(DeprovisionAction(fileutil.rm_files, files)) def reset_hostname(self, warnings, actions): # pylint: disable=W0613 localhost = 
["localhost.localdomain"] actions.append(DeprovisionAction(self.osutil.set_hostname, localhost)) actions.append(DeprovisionAction(self.osutil.set_dhcp_hostname, localhost)) def setup(self, deluser): warnings = [] actions = [] self.stop_agent_service(warnings, actions) if conf.get_regenerate_ssh_host_key(): self.regen_ssh_host_key(warnings, actions) self.del_dhcp_lease(warnings, actions) self.reset_hostname(warnings, actions) if conf.get_delete_root_password(): self.del_root_password(warnings, actions) self.del_dirs(warnings, actions) self.del_files(warnings, actions) self.del_resolv(warnings, actions) if deluser: self.del_user(warnings, actions) self.del_persist_firewall_rules(actions) self.remove_agent_cgroup_config(actions) return warnings, actions def setup_changed_unique_id(self): warnings = [] actions = [] self.del_dhcp_lease(warnings, actions) self.del_lib_dir_files(warnings, actions) self.del_ext_handler_files(warnings, actions) self.del_persist_firewall_rules(actions) self.remove_agent_cgroup_config(actions) return warnings, actions def run(self, force=False, deluser=False): warnings, actions = self.setup(deluser) self.do_warnings(warnings) if self.do_confirmation(force=force): self.do_actions(actions) def run_changed_unique_id(self): ''' Clean-up files and directories that may interfere when the VM unique identifier has changed. While users *should* manually deprovision a VM, the files removed by this routine will help keep the agent from getting confused (since incarnation and extension settings, among other items, will no longer be monotonically increasing). 
''' warnings, actions = self.setup_changed_unique_id() self.do_warnings(warnings) self.do_actions(actions) def do_actions(self, actions): self.actions_running = True for action in actions: action.invoke() self.actions_running = False def do_confirmation(self, force=False): if force: return True confirm = read_input("Do you want to proceed (y/n)") return True if confirm.lower().startswith('y') else False def do_warnings(self, warnings): for warning in warnings: print(warning) def handle_interrupt_signal(self, signum, frame): # pylint: disable=W0613 if not self.actions_running: print("Deprovision is interrupted.") sys.exit(0) print ('Deprovisioning may not be interrupted.') return @staticmethod def del_persist_firewall_rules(actions): agent_network_service_path = PersistFirewallRulesHandler.get_service_file_path() actions.append(DeprovisionAction(fileutil.rm_files, [agent_network_service_path, os.path.join(conf.get_lib_dir(), PersistFirewallRulesHandler.BINARY_FILE_NAME)])) @staticmethod def remove_agent_cgroup_config(actions): # Get all service drop in file paths agent_drop_in_path = systemd.get_agent_drop_in_path() slice_path = os.path.join(agent_drop_in_path, _AGENT_DROP_IN_FILE_SLICE) cpu_accounting_path = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) cpu_quota_path = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_QUOTA) mem_accounting_path = os.path.join(agent_drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) # Get log collector slice unit_file_install_path = systemd.get_unit_file_install_path() log_collector_slice_path = os.path.join(unit_file_install_path, LOGCOLLECTOR_SLICE) actions.append(DeprovisionAction(fileutil.rm_files, [slice_path, cpu_accounting_path, cpu_quota_path, mem_accounting_path, log_collector_slice_path])) Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/factory.py000066400000000000000000000033111510742556200255770ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache 
License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, DISTRO_FULL_NAME from azurelinuxagent.common.utils.distro_version import DistroVersion from .arch import ArchDeprovisionHandler from .clearlinux import ClearLinuxDeprovisionHandler from .coreos import CoreOSDeprovisionHandler from .default import DeprovisionHandler from .ubuntu import UbuntuDeprovisionHandler, Ubuntu1804DeprovisionHandler def get_deprovision_handler(distro_name=DISTRO_NAME, distro_version=DISTRO_VERSION, distro_full_name=DISTRO_FULL_NAME): if distro_name == "arch": return ArchDeprovisionHandler() if distro_name == "ubuntu": if DistroVersion(distro_version) >= DistroVersion('18.04'): return Ubuntu1804DeprovisionHandler() else: return UbuntuDeprovisionHandler() if distro_name in ("flatcar", "coreos"): return CoreOSDeprovisionHandler() if "Clear Linux" in distro_full_name: return ClearLinuxDeprovisionHandler() # pylint: disable=E1120 return DeprovisionHandler() Azure-WALinuxAgent-a976115/azurelinuxagent/pa/deprovision/ubuntu.py000066400000000000000000000042141510742556200254550ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction class UbuntuDeprovisionHandler(DeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(UbuntuDeprovisionHandler, self).__init__() def del_resolv(self, warnings, actions): if os.path.realpath( '/etc/resolv.conf') != '/run/resolvconf/resolv.conf': warnings.append("WARNING! /etc/resolv.conf will be deleted.") files_to_del = ["/etc/resolv.conf"] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) else: warnings.append("WARNING! /etc/resolvconf/resolv.conf.d/tail " "and /etc/resolvconf/resolv.conf.d/original will " "be deleted.") files_to_del = ["/etc/resolvconf/resolv.conf.d/tail", "/etc/resolvconf/resolv.conf.d/original"] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) class Ubuntu1804DeprovisionHandler(UbuntuDeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(Ubuntu1804DeprovisionHandler, self).__init__() def del_resolv(self, warnings, actions): # no changes will be made to /etc/resolv.conf warnings.append("WARNING! 
/etc/resolv.conf will NOT be removed, this is a behavior change to earlier " "versions of Ubuntu.") Azure-WALinuxAgent-a976115/azurelinuxagent/pa/provision/000077500000000000000000000000001510742556200232475ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/pa/provision/__init__.py000066400000000000000000000012751510742556200253650ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.pa.provision.factory import get_provision_handler Azure-WALinuxAgent-a976115/azurelinuxagent/pa/provision/cloudinit.py000066400000000000000000000142031510742556200256130ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
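The Ubuntu deprovision handlers above are chosen by `get_deprovision_handler()` in `factory.py`, which compares `DistroVersion` objects to gate the 18.04+ behavior. A minimal, self-contained sketch of that version-gated dispatch, using a hypothetical `parse_version` tuple helper in place of the agent's real `DistroVersion` class (which handles more exotic version strings):

```python
# Sketch of the version-gated handler dispatch used by
# get_deprovision_handler(). parse_version is a hypothetical stand-in
# for azurelinuxagent's DistroVersion comparison and only handles
# simple dotted-numeric versions like '18.04'.

def parse_version(version_string):
    """Turn '18.04' into a comparable tuple (18, 4)."""
    return tuple(int(part) for part in version_string.split('.'))

def pick_ubuntu_handler(distro_version):
    """Return the handler name a factory like this would choose."""
    if parse_version(distro_version) >= parse_version('18.04'):
        return 'Ubuntu1804DeprovisionHandler'
    return 'UbuntuDeprovisionHandler'

print(pick_ubuntu_handler('16.04'))  # UbuntuDeprovisionHandler
print(pick_ubuntu_handler('20.04'))  # Ubuntu1804DeprovisionHandler
```

Tuple comparison is why 18.04 is the cutover: `(20, 4) >= (18, 4)` holds element-wise, so every later release inherits the resolv.conf-preserving handler.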
# # Requires Python 2.6+ and Openssl 1.0+ # import os import os.path import time from datetime import datetime import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.event import elapsed_milliseconds from azurelinuxagent.common.exception import ProvisionError, ProtocolError from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.common.protocol.util import OVF_FILE_NAME from azurelinuxagent.common.protocol.ovfenv import OvfEnv from azurelinuxagent.pa.provision.default import ProvisionHandler from azurelinuxagent.pa.provision.cloudinitdetect import cloud_init_is_enabled class CloudInitProvisionHandler(ProvisionHandler): def __init__(self): # pylint: disable=W0235 super(CloudInitProvisionHandler, self).__init__() def run(self): try: if super(CloudInitProvisionHandler, self).check_provisioned_file(): logger.info("Provisioning already completed, skipping.") return utc_start = datetime.now(UTC) logger.info("Running CloudInit provisioning handler") self.wait_for_ovfenv() self.protocol_util.get_protocol() # Trigger protocol detection self.report_not_ready("Provisioning", "Starting") thumbprint = self.wait_for_ssh_host_key() # pylint: disable=W0612 self.write_provisioned() logger.info("Finished provisioning") self.report_ready() self.report_event("Provisioning with cloud-init succeeded ({0}s)".format(self._get_uptime_seconds()), is_success=True, duration=elapsed_milliseconds(utc_start)) except ProvisionError as e: msg = "Provisioning with cloud-init failed: {0} ({1}s)".format(ustr(e), self._get_uptime_seconds()) logger.error(msg) self.report_not_ready("ProvisioningFailed", ustr(e)) self.report_event(msg) return def wait_for_ovfenv(self, max_retry=1800, sleep_time=1): """ Wait for cloud-init to copy ovf-env.xml file from provision ISO """ ovf_file_path = os.path.join(conf.get_lib_dir(), OVF_FILE_NAME) logging_interval = 10 
max_logging_interval = 320 for retry in range(0, max_retry): if os.path.isfile(ovf_file_path): try: ovf_env = OvfEnv(fileutil.read_file(ovf_file_path)) self.handle_provision_guest_agent(ovf_env.provision_guest_agent) return except ProtocolError as pe: raise ProvisionError("OVF xml could not be parsed " "[{0}]: {1}".format(ovf_file_path, ustr(pe))) else: if retry < max_retry - 1: if retry % logging_interval == 0: logger.info( "Waiting for cloud-init to copy ovf-env.xml to {0} " "[{1} retries remaining, " "sleeping {2}s between retries]".format(ovf_file_path, max_retry - retry, sleep_time)) if not cloud_init_is_enabled(): logger.warn("cloud-init does not appear to be enabled") logging_interval = min(logging_interval * 2, max_logging_interval) time.sleep(sleep_time) raise ProvisionError("Giving up, ovf-env.xml was not copied to {0} " "after {1}s".format(ovf_file_path, max_retry * sleep_time)) def wait_for_ssh_host_key(self, max_retry=1800, sleep_time=1): """ Wait for cloud-init to generate ssh host key """ keypair_type = conf.get_ssh_host_keypair_type() # pylint: disable=W0612 path = conf.get_ssh_key_public_path() logging_interval = 10 max_logging_interval = 320 for retry in range(0, max_retry): if os.path.isfile(path): logger.info("ssh host key found at: {0}".format(path)) try: thumbprint = self.get_ssh_host_key_thumbprint(chk_err=False) logger.info("Thumbprint obtained from : {0}".format(path)) return thumbprint except ProvisionError: logger.warn("Could not get thumbprint from {0}".format(path)) if retry < max_retry - 1: if retry % logging_interval == 0: logger.info("Waiting for ssh host key be generated at {0} " "[{1} attempts remaining, " "sleeping {2}s between retries]".format(path, max_retry - retry, sleep_time)) if not cloud_init_is_enabled(): logger.warn("cloud-init does not appear to be running") logging_interval = min(logging_interval * 2, max_logging_interval) time.sleep(sleep_time) raise ProvisionError("Giving up, ssh host key was not found at {0} " "after 
{1}s".format(path, max_retry * sleep_time)) Azure-WALinuxAgent-a976115/azurelinuxagent/pa/provision/cloudinitdetect.py000066400000000000000000000035261510742556200270120ustar00rootroot00000000000000"""Module for detecting the existence of cloud-init""" import subprocess import azurelinuxagent.common.logger as logger def _cloud_init_is_enabled_systemd(): """ Determine whether or not cloud-init is enabled on a systemd machine. Args: None Returns: bool: True if cloud-init is enabled, False if otherwise. """ try: systemctl_output = subprocess.check_output([ 'systemctl', 'is-enabled', 'cloud-init-local.service' ], stderr=subprocess.STDOUT).decode('utf-8').replace('\n', '') unit_is_enabled = systemctl_output == 'enabled' # pylint: disable=broad-except except Exception as exc: logger.info('Unable to get cloud-init enabled status from systemctl: {0}'.format(exc)) unit_is_enabled = False return unit_is_enabled def _cloud_init_is_enabled_service(): """ Determine whether or not cloud-init is enabled on a non-systemd machine. Args: None Returns: bool: True if cloud-init is enabled, False if otherwise. """ try: subprocess.check_output([ 'service', 'cloud-init', 'status' ], stderr=subprocess.STDOUT) unit_is_enabled = True # pylint: disable=broad-except except Exception as exc: logger.info('Unable to get cloud-init enabled status from service: {0}'.format(exc)) unit_is_enabled = False return unit_is_enabled def cloud_init_is_enabled(): """ Determine whether or not cloud-init is enabled. Args: None Returns: bool: True if cloud-init is enabled, False if otherwise. 
""" unit_is_enabled = _cloud_init_is_enabled_systemd() or _cloud_init_is_enabled_service() logger.info('cloud-init is enabled: {0}'.format(unit_is_enabled)) return unit_is_enabled Azure-WALinuxAgent-a976115/azurelinuxagent/pa/provision/default.py000066400000000000000000000266141510742556200252560ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ Provision handler """ import os import os.path import re import time from datetime import datetime import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.common.event import add_event, WALAEventOperation, \ elapsed_milliseconds from azurelinuxagent.common.exception import ProvisionError, ProtocolError, \ OSUtilError from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.restapi import ProvisionStatus from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.version import AGENT_NAME from azurelinuxagent.pa.provision.cloudinitdetect import cloud_init_is_enabled CUSTOM_DATA_FILE = "CustomData" CLOUD_INIT_PATTERN = b".*/bin/cloud-init.*" CLOUD_INIT_REGEX = re.compile(CLOUD_INIT_PATTERN) PROVISIONED_FILE = 'provisioned' class 
ProvisionHandler(object): def __init__(self): self.osutil = get_osutil() self.protocol_util = get_protocol_util() def run(self): if not conf.get_provision_enabled(): logger.info("Provisioning is disabled, skipping.") self.write_provisioned() self.report_ready() return try: utc_start = datetime.now(UTC) thumbprint = None # pylint: disable=W0612 if self.check_provisioned_file(): logger.info("Provisioning already completed, skipping.") return logger.info("Running default provisioning handler") if cloud_init_is_enabled(): raise ProvisionError("cloud-init appears to be installed and enabled, " "this is not expected, cannot continue") logger.info("Copying ovf-env.xml") ovf_env = self.protocol_util.copy_ovf_env() self.protocol_util.get_protocol() # Trigger protocol detection self.report_not_ready("Provisioning", "Starting") logger.info("Starting provisioning") self.provision(ovf_env) thumbprint = self.reg_ssh_host_key() self.osutil.restart_ssh_service() self.write_provisioned() self.report_event("Provisioning succeeded ({0}s)".format(self._get_uptime_seconds()), is_success=True, duration=elapsed_milliseconds(utc_start)) self.handle_provision_guest_agent(ovf_env.provision_guest_agent) self.report_ready() logger.info("Provisioning complete") except (ProtocolError, ProvisionError) as e: msg = "Provisioning failed: {0} ({1}s)".format(ustr(e), self._get_uptime_seconds()) logger.error(msg) self.report_not_ready("ProvisioningFailed", ustr(e)) self.report_event(msg, is_success=False) return @staticmethod def _get_uptime_seconds(): try: with open('/proc/uptime') as fh: uptime, _ = fh.readline().split() return uptime except: # pylint: disable=W0702 return 0 def reg_ssh_host_key(self): keypair_type = conf.get_ssh_host_keypair_type() if conf.get_regenerate_ssh_host_key(): fileutil.rm_files(conf.get_ssh_key_glob()) if conf.get_ssh_host_keypair_mode() == "auto": # pylint: disable=W0105 ''' The -A option generates all supported key types. This is supported since OpenSSH 5.9 (2011). 
''' # pylint: enable=W0105 shellutil.run("ssh-keygen -A") else: keygen_cmd = "ssh-keygen -N '' -t {0} -f {1}" shellutil.run(keygen_cmd. format(keypair_type, conf.get_ssh_key_private_path())) return self.get_ssh_host_key_thumbprint() def get_ssh_host_key_thumbprint(self, chk_err=True): cmd = "ssh-keygen -lf {0}".format(conf.get_ssh_key_public_path()) ret = shellutil.run_get_output(cmd, chk_err=chk_err) if ret[0] == 0: return ret[1].rstrip().split()[1].replace(':', '') else: raise ProvisionError(("Failed to generate ssh host key: " "ret={0}, out= {1}").format(ret[0], ret[1])) @staticmethod def provisioned_file_path(): return os.path.join(conf.get_lib_dir(), PROVISIONED_FILE) @staticmethod def is_provisioned(): """ A VM is considered provisioned *anytime* the provisioning sentinel file exists and not provisioned *anytime* the file is absent. """ return os.path.isfile(ProvisionHandler.provisioned_file_path()) def check_provisioned_file(self): """ If the VM was provisioned using an agent that did not record the VM unique identifier, the provisioning file will be re-written to include the identifier. A warning is logged *if* the VM unique identifier has changed since VM was provisioned. Returns False if the VM has not been provisioned. """ if not ProvisionHandler.is_provisioned(): return False s = fileutil.read_file(ProvisionHandler.provisioned_file_path()).strip() if not self.osutil.is_current_instance_id(s): if len(s) > 0: msg = "VM is provisioned, but the VM unique identifier has changed. 
This indicates the VM may be " \ "created from an image that was not properly deprovisioned or generalized, which can result in " \ "unexpected behavior from the guest agent -- clearing cached state" logger.warn(msg) self.report_event(msg) from azurelinuxagent.pa.deprovision \ import get_deprovision_handler deprovision_handler = get_deprovision_handler() deprovision_handler.run_changed_unique_id() self.write_provisioned() self.report_ready() return True def write_provisioned(self): fileutil.write_file( ProvisionHandler.provisioned_file_path(), get_osutil().get_instance_id()) @staticmethod def write_agent_disabled(): logger.warn("Disabling guest agent in accordance with ovf-env.xml") fileutil.write_file(conf.get_disable_agent_file_path(), '') def handle_provision_guest_agent(self, provision_guest_agent): self.report_event(message=provision_guest_agent, is_success=True, duration=0, operation=WALAEventOperation.ProvisionGuestAgent) if provision_guest_agent and provision_guest_agent.lower() == 'false': self.write_agent_disabled() def provision(self, ovfenv): logger.info("Handle ovf-env.xml.") try: logger.info("Set hostname [{0}]".format(ovfenv.hostname)) self.osutil.set_hostname(ovfenv.hostname) logger.info("Publish hostname [{0}]".format(ovfenv.hostname)) self.osutil.publish_hostname(ovfenv.hostname) self.config_user_account(ovfenv) self.save_customdata(ovfenv) if conf.get_delete_root_password(): self.osutil.del_root_password() except OSUtilError as e: raise ProvisionError("Failed to provision: {0}".format(ustr(e))) def config_user_account(self, ovfenv): logger.info("Create user account if not exists") self.osutil.useradd(ovfenv.username) if ovfenv.user_password is not None: logger.info("Set user password.") crypt_id = conf.get_password_cryptid() salt_len = conf.get_password_crypt_salt_len() self.osutil.chpasswd(ovfenv.username, ovfenv.user_password, crypt_id=crypt_id, salt_len=salt_len) logger.info("Configure sudoer") self.osutil.conf_sudoer(ovfenv.username, 
nopasswd=ovfenv.user_password is None) logger.info("Configure sshd") self.osutil.conf_sshd(ovfenv.disable_ssh_password_auth) self.deploy_ssh_pubkeys(ovfenv) self.deploy_ssh_keypairs(ovfenv) def save_customdata(self, ovfenv): customdata = ovfenv.customdata if customdata is None: return lib_dir = conf.get_lib_dir() if conf.get_decode_customdata() or conf.get_execute_customdata(): logger.info("Decode custom data") customdata = self.osutil.decode_customdata(customdata) logger.info("Save custom data") customdata_file = os.path.join(lib_dir, CUSTOM_DATA_FILE) fileutil.write_file(customdata_file, customdata) if conf.get_execute_customdata(): start = time.time() logger.info("Execute custom data") os.chmod(customdata_file, 0o700) shellutil.run(customdata_file) add_event(name=AGENT_NAME, duration=int(time.time() - start), is_success=True, op=WALAEventOperation.CustomData) def deploy_ssh_pubkeys(self, ovfenv): for pubkey in ovfenv.ssh_pubkeys: logger.info("Deploy ssh public key.") self.osutil.deploy_ssh_pubkey(ovfenv.username, pubkey) def deploy_ssh_keypairs(self, ovfenv): for keypair in ovfenv.ssh_keypairs: logger.info("Deploy ssh key pairs.") self.osutil.deploy_ssh_keypair(ovfenv.username, keypair) def report_event(self, message, is_success=False, duration=0, operation=WALAEventOperation.Provision): add_event(name=AGENT_NAME, message=message, duration=duration, is_success=is_success, op=operation) def report_not_ready(self, sub_status, description): status = ProvisionStatus(status="NotReady", subStatus=sub_status, description=description) try: protocol = self.protocol_util.get_protocol() protocol.report_provision_status(status) except ProtocolError as e: logger.error("Reporting NotReady failed: {0}", e) self.report_event(ustr(e)) def report_ready(self): status = ProvisionStatus(status="Ready") try: protocol = self.protocol_util.get_protocol() protocol.report_provision_status(status) except ProtocolError as e: logger.error("Reporting Ready failed: {0}", e) 
self.report_event(ustr(e)) Azure-WALinuxAgent-a976115/azurelinuxagent/pa/provision/factory.py000066400000000000000000000030441510742556200252710ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.conf as conf from azurelinuxagent.common import logger from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, \ DISTRO_FULL_NAME from .default import ProvisionHandler from .cloudinit import CloudInitProvisionHandler, cloud_init_is_enabled def get_provision_handler(distro_name=DISTRO_NAME, # pylint: disable=W0613 distro_version=DISTRO_VERSION, # pylint: disable=W0613 distro_full_name=DISTRO_FULL_NAME): # pylint: disable=W0613 provisioning_agent = conf.get_provisioning_agent() if provisioning_agent == 'cloud-init' or ( provisioning_agent == 'auto' and cloud_init_is_enabled()): logger.info('Using cloud-init for provisioning') return CloudInitProvisionHandler() logger.info('Using waagent for provisioning') return ProvisionHandler() Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/000077500000000000000000000000001510742556200221425ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/__init__.py000066400000000000000000000012631510742556200242550ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except 
in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.pa.rdma.factory import get_rdma_handler Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/centos.py000066400000000000000000000242251510742556200240140ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
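The provisioning factory above picks between the cloud-init and waagent handlers from two inputs: the configured `Provisioning.Agent` value and a runtime probe of whether cloud-init is enabled. A standalone sketch of that decision, where `cloud_init_enabled` is a plain boolean standing in for the real `cloud_init_is_enabled()` probe (which shells out to `systemctl`/`service`):

```python
# Sketch of the selection logic in get_provision_handler().
# cloud_init_enabled replaces the real cloud_init_is_enabled() probe.

def pick_provision_handler(provisioning_agent, cloud_init_enabled):
    """Explicit 'cloud-init' wins; 'auto' defers to runtime detection."""
    if provisioning_agent == 'cloud-init' or (
            provisioning_agent == 'auto' and cloud_init_enabled):
        return 'CloudInitProvisionHandler'
    return 'ProvisionHandler'

print(pick_provision_handler('auto', True))         # CloudInitProvisionHandler
print(pick_provision_handler('auto', False))        # ProvisionHandler
print(pick_provision_handler('cloud-init', False))  # CloudInitProvisionHandler
```

Note that an explicit `'cloud-init'` setting selects the cloud-init handler even when detection fails, which matches the short-circuit `or` in the factory.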
# # Requires Python 2.6+ and Openssl 1.0+ # import glob # pylint: disable=W0611 import os import re import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.pa.rdma.rdma import RDMAHandler class CentOSRDMAHandler(RDMAHandler): rdma_user_mode_package_name = 'microsoft-hyper-v-rdma' rdma_kernel_mode_package_name = 'kmod-microsoft-hyper-v-rdma' rdma_wrapper_package_name = 'msft-rdma-drivers' hyper_v_package_name = "hypervkvpd" hyper_v_package_name_new = "microsoft-hyper-v" version_major = None version_minor = None def __init__(self, distro_version): v = distro_version.split('.') if len(v) < 2: raise Exception('Unexpected centos version: %s' % distro_version) self.version_major, self.version_minor = v[0], v[1] def install_driver(self): """ Install the KVP daemon and the appropriate RDMA driver package for the RDMA firmware. """ # Check and install the KVP deamon if it not running time.sleep(10) # give some time for the hv_hvp_daemon to start up. 
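The `is_rdma_package_up_to_date()` check further down decides whether the installed driver matches the RDMA firmware by matching the firmware number embedded in the installed package name with a regex. A standalone sketch using the same pattern shape (the package name and version shown are taken from the example in the code's own comment; `rdma_package_matches` is an illustrative wrapper, not an agent API):

```python
import re

# Standalone sketch of the regex in is_rdma_package_up_to_date():
# installed packages look like microsoft-hyper-v-rdma-4.1.0.142-20160323.x86_64,
# and the last dotted component (142) must equal the firmware version
# reported for the RDMA device.

def rdma_package_matches(pkg, fw_version,
                         pkg_name='microsoft-hyper-v-rdma'):
    # (\d+\.){3,} consumes the leading dotted segments ("4.1.0."),
    # leaving (fw_version) to anchor on the final component before '-'.
    pattern = r'{0}-(\d+\.){{3,}}({1})-'.format(pkg_name, fw_version)
    return re.match(pattern, pkg) is not None

pkg = 'microsoft-hyper-v-rdma-4.1.0.142-20160323.x86_64'
print(rdma_package_matches(pkg, 142))  # True  -> driver is up to date
print(rdma_package_matches(pkg, 140))  # False -> driver needs updating
```

Because the repetition `(\d+\.){3,}` only matches digit groups terminated by a dot, it cannot swallow the firmware component itself, so a stale firmware number reliably fails the match.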
kvpd_running = RDMAHandler.is_kvp_daemon_running() logger.info('RDMA: kvp daemon running: %s' % kvpd_running) if not kvpd_running: self.check_or_install_kvp_daemon() time.sleep(10) # wait for post-install reboot or kvp to come up # Find out RDMA firmware version and see if the existing package needs # updating or if the package is missing altogether (and install it) fw_version = self.get_rdma_version() if not fw_version: raise Exception('Cannot determine RDMA firmware version') logger.info("RDMA: found firmware version: {0}".format(fw_version)) fw_version = self.get_int_rdma_version(fw_version) installed_pkg = self.get_rdma_package_info() if installed_pkg: logger.info( 'RDMA: driver package present: {0}'.format(installed_pkg)) if self.is_rdma_package_up_to_date(installed_pkg, fw_version): logger.info('RDMA: driver package is up-to-date') return else: logger.info('RDMA: driver package needs updating') self.update_rdma_package(fw_version) else: logger.info('RDMA: driver package is NOT installed') self.update_rdma_package(fw_version) def is_rdma_package_up_to_date(self, pkg, fw_version): # Example match (pkg name, -, followed by 3 segments, fw_version and -): # - pkg=microsoft-hyper-v-rdma-4.1.0.142-20160323.x86_64 # - fw_version=142 pattern = r'{0}-(\d+\.){{3,}}({1})-'.format(self.rdma_user_mode_package_name, fw_version) return re.match(pattern, pkg) @staticmethod def get_int_rdma_version(version): s = version.split('.') if len(s) == 0: raise Exception('Unexpected RDMA firmware version: "%s"' % version) return s[0] def get_rdma_package_info(self): """ Returns the installed rdma package name or None """ ret, output = shellutil.run_get_output( 'rpm -q %s' % self.rdma_user_mode_package_name, chk_err=False) if ret != 0: return None return output def update_rdma_package(self, fw_version): logger.info("RDMA: updating RDMA packages") self.refresh_repos() self.force_install_package(self.rdma_wrapper_package_name) self.install_rdma_drivers(fw_version) def 
force_install_package(self, pkg_name): """ Attempts to remove existing package and installs the package """ logger.info('RDMA: Force installing package: %s' % pkg_name) if self.uninstall_package(pkg_name) != 0: logger.info('RDMA: Erasing package failed but will continue') if self.install_package(pkg_name) != 0: raise Exception('Failed to install package "{0}"'.format(pkg_name)) logger.info('RDMA: installation completed: %s' % pkg_name) @staticmethod def uninstall_package(pkg_name): return shellutil.run('yum erase -y -q {0}'.format(pkg_name)) @staticmethod def install_package(pkg_name): return shellutil.run('yum install -y -q {0}'.format(pkg_name)) def refresh_repos(self): logger.info("RDMA: refreshing yum repos") if shellutil.run('yum clean all') != 0: raise Exception('Cleaning yum repositories failed') if shellutil.run('yum updateinfo') != 0: raise Exception('Failed to act on yum repo update information') logger.info("RDMA: repositories refreshed") def install_rdma_drivers(self, fw_version): """ Installs the drivers from /opt/rdma/rhel[Major][Minor] directory, particularly the microsoft-hyper-v-rdma-* kmod-* and (no debuginfo or src). Tries to uninstall them first. 
""" pkg_dir = '/opt/microsoft/rdma/rhel{0}{1}'.format( self.version_major, self.version_minor) logger.info('RDMA: pkgs dir: {0}'.format(pkg_dir)) if not os.path.isdir(pkg_dir): raise Exception('RDMA packages directory %s is missing' % pkg_dir) pkgs = os.listdir(pkg_dir) logger.info('RDMA: found %d files in package directory' % len(pkgs)) # Uninstall KVP daemon first (if it exists) self.uninstall_kvp_driver_package_if_exists() # Install kernel mode driver (kmod-microsoft-hyper-v-rdma-*) kmod_pkg = self.get_file_by_pattern( pkgs, r"%s-(\d+\.){3,}(%s)-\d{8}\.x86_64.rpm" % (self.rdma_kernel_mode_package_name, fw_version)) if not kmod_pkg: raise Exception("RDMA kernel mode package not found") kmod_pkg_path = os.path.join(pkg_dir, kmod_pkg) self.uninstall_pkg_and_install_from( 'kernel mode', self.rdma_kernel_mode_package_name, kmod_pkg_path) # Install user mode driver (microsoft-hyper-v-rdma-*) umod_pkg = self.get_file_by_pattern( pkgs, r"%s-(\d+\.){3,}(%s)-\d{8}\.x86_64.rpm" % (self.rdma_user_mode_package_name, fw_version)) if not umod_pkg: raise Exception("RDMA user mode package not found") umod_pkg_path = os.path.join(pkg_dir, umod_pkg) self.uninstall_pkg_and_install_from( 'user mode', self.rdma_user_mode_package_name, umod_pkg_path) logger.info("RDMA: driver packages installed") if not self.load_driver_module() or not self.is_driver_loaded(): logger.info("RDMA: driver module is not loaded; reboot required") self.reboot_system() else: logger.info("RDMA: kernel module is loaded") @staticmethod def get_file_by_pattern(file_list, pattern): for l in file_list: if re.match(pattern, l): return l return None def uninstall_pkg_and_install_from(self, pkg_type, pkg_name, pkg_path): logger.info( "RDMA: Processing {0} driver: {1}".format(pkg_type, pkg_path)) logger.info("RDMA: Try to uninstall existing version: %s" % pkg_name) if self.uninstall_package(pkg_name) == 0: logger.info("RDMA: Successfully uninstalled %s" % pkg_name) logger.info( "RDMA: Installing {0} package from
{1}".format(pkg_type, pkg_path)) if self.install_package(pkg_path) != 0: raise Exception( "Failed to install RDMA {0} package".format(pkg_type)) @staticmethod def is_package_installed(pkg): """Runs rpm -q and checks return code to find out if a package is installed""" return shellutil.run("rpm -q %s" % pkg, chk_err=False) == 0 def uninstall_kvp_driver_package_if_exists(self): logger.info('RDMA: deleting existing kvp driver packages') kvp_pkgs = [self.hyper_v_package_name, self.hyper_v_package_name_new] for kvp_pkg in kvp_pkgs: if not self.is_package_installed(kvp_pkg): logger.info( "RDMA: kvp package %s does not exist, skipping" % kvp_pkg) else: logger.info('RDMA: erasing kvp package "%s"' % kvp_pkg) if shellutil.run("yum erase -q -y %s" % kvp_pkg, chk_err=False) == 0: logger.info("RDMA: successfully erased package") else: logger.error("RDMA: failed to erase package") def check_or_install_kvp_daemon(self): """Checks if kvp daemon package is installed, if not installs the package and reboots the machine. 
""" logger.info("RDMA: Checking kvp daemon packages.") kvp_pkgs = [self.hyper_v_package_name, self.hyper_v_package_name_new] for pkg in kvp_pkgs: logger.info("RDMA: Checking if package %s installed" % pkg) installed = self.is_package_installed(pkg) if installed: raise Exception('RDMA: package %s is installed, but the kvp daemon is not running' % pkg) kvp_pkg_to_install=self.hyper_v_package_name logger.info("RDMA: no kvp drivers installed, will install '%s'" % kvp_pkg_to_install) logger.info("RDMA: trying to install kvp package '%s'" % kvp_pkg_to_install) if self.install_package(kvp_pkg_to_install) != 0: raise Exception("RDMA: failed to install kvp daemon package '%s'" % kvp_pkg_to_install) logger.info("RDMA: package '%s' successfully installed" % kvp_pkg_to_install) logger.info("RDMA: Machine will now be rebooted.") self.reboot_system() Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/factory.py000066400000000000000000000035161510742556200241700ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.logger as logger from azurelinuxagent.pa.rdma.rdma import RDMAHandler from azurelinuxagent.common.version import DISTRO_FULL_NAME, DISTRO_VERSION from azurelinuxagent.common.utils.distro_version import DistroVersion from .centos import CentOSRDMAHandler from .suse import SUSERDMAHandler from .ubuntu import UbuntuRDMAHandler def get_rdma_handler( distro_full_name=DISTRO_FULL_NAME, distro_version=DISTRO_VERSION ): """Return the handler object for RDMA driver handling""" if ( (distro_full_name == 'SUSE Linux Enterprise Server' or distro_full_name == 'SLES' or distro_full_name == 'SLE_HPC') and DistroVersion(distro_version) > DistroVersion('11') ): return SUSERDMAHandler() if distro_full_name in ('CentOS Linux', 'CentOS', 'Red Hat Enterprise Linux Server', 'AlmaLinux', 'CloudLinux', 'Rocky Linux'): return CentOSRDMAHandler(distro_version) if distro_full_name == 'Ubuntu': return UbuntuRDMAHandler() logger.info("No RDMA handler exists for distro='{0}' version='{1}'", distro_full_name, distro_version) return RDMAHandler() Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/rdma.py000066400000000000000000000535031510742556200234450ustar00rootroot00000000000000# Windows Azure Linux Agent # # Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# """ Handle packages and modules to enable RDMA for IB networking """ import os import re import time import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.utils.textutil import parse_doc, find, getattrib dapl_config_paths = [ '/etc/dat.conf', '/etc/rdma/dat.conf', '/usr/local/etc/dat.conf' ] def setup_rdma_device(nd_version, shared_conf): logger.verbose("Parsing SharedConfig XML contents for RDMA details") xml_doc = parse_doc(shared_conf.xml_text) if xml_doc is None: logger.error("Could not parse SharedConfig XML document") return instance_elem = find(xml_doc, "Instance") if not instance_elem: logger.error("Could not find <Instance> in SharedConfig document") return rdma_ipv4_addr = getattrib(instance_elem, "rdmaIPv4Address") if not rdma_ipv4_addr: logger.error( "Could not find rdmaIPv4Address attribute on Instance element of SharedConfig.xml document") return rdma_mac_addr = getattrib(instance_elem, "rdmaMacAddress") if not rdma_mac_addr: logger.error( "Could not find rdmaMacAddress attribute on Instance element of SharedConfig.xml document") return # add colons to the MAC address (e.g. 00155D33FF1D -> # 00:15:5D:33:FF:1D) rdma_mac_addr = ':'.join([rdma_mac_addr[i:i + 2] for i in range(0, len(rdma_mac_addr), 2)]) logger.info("Found RDMA details. IPv4={0} MAC={1}".format( rdma_ipv4_addr, rdma_mac_addr)) # Set up the RDMA device with collected information RDMADeviceHandler(rdma_ipv4_addr, rdma_mac_addr, nd_version).start() logger.info("RDMA: device is set up") return class RDMAHandler(object): driver_module_name = 'hv_network_direct' nd_version = None def get_rdma_version(self): # pylint: disable=R1710 """Retrieve the firmware version information from the system.
This depends on information provided by the Linux kernel.""" if self.nd_version: return self.nd_version kvp_key_size = 512 kvp_value_size = 2048 driver_info_source = '/var/lib/hyperv/.kvp_pool_0' base_kernel_err_msg = 'Kernel does not provide the necessary ' base_kernel_err_msg += 'information or the kvp daemon is not running.' if not os.path.isfile(driver_info_source): error_msg = 'RDMA: Source file "%s" does not exist. ' error_msg += base_kernel_err_msg logger.error(error_msg % driver_info_source) return with open(driver_info_source, "rb") as pool_file: while True: key = pool_file.read(kvp_key_size) value = pool_file.read(kvp_value_size) if key and value: key_0 = key.partition(b"\x00")[0] if key_0: key_0 = key_0.decode() value_0 = value.partition(b"\x00")[0] if value_0: value_0 = value_0.decode() if key_0 == "NdDriverVersion": self.nd_version = value_0 return self.nd_version else: break error_msg = 'RDMA: NdDriverVersion not found in "%s"' logger.error(error_msg % driver_info_source) return @staticmethod def is_kvp_daemon_running(): """Look for kvp daemon names in ps -ef output and return True/False """ # for centos, the hypervkvpd and the hv_kvp_daemon both are ok. # for suse, it uses hv_kvp_daemon kvp_daemon_names = ['hypervkvpd', 'hv_kvp_daemon'] exitcode, ps_out = shellutil.run_get_output("ps -ef") if exitcode != 0: raise Exception('RDMA: ps -ef failed: %s' % ps_out) for n in kvp_daemon_names: if n in ps_out: logger.info('RDMA: kvp daemon (%s) is running' % n) return True else: logger.verbose('RDMA: kvp daemon (%s) is not running' % n) return False def load_driver_module(self): """Load the kernel driver, this depends on the proper driver to be installed with the install_driver() method""" logger.info("RDMA: probing module '%s'" % self.driver_module_name) result = shellutil.run('modprobe --first-time %s' % self.driver_module_name) if result != 0: error_msg = 'Could not load "%s" kernel module. 
' error_msg += 'Run "modprobe --first-time %s" as root for more details' logger.error( error_msg % (self.driver_module_name, self.driver_module_name) ) return False logger.info('RDMA: Loaded the kernel driver successfully.') return True def install_driver_if_needed(self): if self.nd_version: if conf.enable_check_rdma_driver(): self.install_driver() else: logger.info('RDMA: check RDMA driver is disabled, skip installing driver') else: logger.info('RDMA: skip installing driver when ndversion not present\n') def install_driver(self): """Install the driver. This is distribution specific and must be overwritten in the child implementation.""" logger.error('RDMAHandler.install_driver not implemented') def is_driver_loaded(self): """Check if the network module is loaded in kernel space""" cmd = 'lsmod | grep ^%s' % self.driver_module_name status, loaded_modules = shellutil.run_get_output(cmd) # pylint: disable=W0612 logger.info('RDMA: Checking if the module loaded.') if loaded_modules: logger.info('RDMA: module loaded.') return True logger.info('RDMA: module not loaded.') return False def reboot_system(self): """Reboot the system. This is required as the kernel module for the rdma driver cannot be unloaded with rmmod""" logger.info('RDMA: Rebooting system.') ret = shellutil.run('shutdown -r now') if ret != 0: logger.error('RDMA: Failed to reboot the system') dapl_config_paths = [ '/etc/dat.conf', '/etc/rdma/dat.conf', '/usr/local/etc/dat.conf'] class RDMADeviceHandler(object): """ Responsible for writing RDMA IP and MAC address to the /dev/hvnd_rdma interface. 
""" rdma_dev = '/dev/hvnd_rdma' sriov_dir = '/sys/class/infiniband' device_check_timeout_sec = 120 device_check_interval_sec = 1 ipoib_check_timeout_sec = 60 ipoib_check_interval_sec = 1 ipv4_addr = None mac_addr = None nd_version = None def __init__(self, ipv4_addr, mac_addr, nd_version): self.ipv4_addr = ipv4_addr self.mac_addr = mac_addr self.nd_version = nd_version def start(self): logger.info("RDMA: starting device processing.") self.process() logger.info("RDMA: completed device processing.") def process(self): try: if not self.nd_version: logger.info("RDMA: provisioning SRIOV RDMA device.") self.provision_sriov_rdma() else: logger.info("RDMA: provisioning Network Direct RDMA device.") self.provision_network_direct_rdma() except Exception as e: logger.error("RDMA: device processing failed: {0}".format(e)) def provision_network_direct_rdma(self): RDMADeviceHandler.update_dat_conf(dapl_config_paths, self.ipv4_addr) if not conf.enable_check_rdma_driver(): logger.info("RDMA: skip checking RDMA driver version") RDMADeviceHandler.update_network_interface(self.mac_addr, self.ipv4_addr) return skip_rdma_device = False module_name = "hv_network_direct" retcode, out = shellutil.run_get_output("modprobe -R %s" % module_name, chk_err=False) if retcode == 0: module_name = out.strip() else: logger.info("RDMA: failed to resolve module name. 
Use original name") retcode, out = shellutil.run_get_output("modprobe %s" % module_name) if retcode != 0: logger.error("RDMA: failed to load module %s" % module_name) return retcode, out = shellutil.run_get_output("modinfo %s" % module_name) if retcode == 0: version = re.search(r"version:\s+(\d+)\.(\d+)\.(\d+)\D", out, re.IGNORECASE) if version: v1 = int(version.groups(0)[0]) v2 = int(version.groups(0)[1]) if v1 > 4 or v1 == 4 and v2 > 0: logger.info("Skip setting /dev/hvnd_rdma on 4.1 or later") skip_rdma_device = True else: logger.info("RDMA: hv_network_direct driver version not present, assuming 4.0.x or older.") else: logger.warn("RDMA: failed to get module info on hv_network_direct.") if not skip_rdma_device: RDMADeviceHandler.wait_rdma_device( self.rdma_dev, self.device_check_timeout_sec, self.device_check_interval_sec) RDMADeviceHandler.write_rdma_config_to_device( self.rdma_dev, self.ipv4_addr, self.mac_addr) RDMADeviceHandler.update_network_interface(self.mac_addr, self.ipv4_addr) def provision_sriov_rdma(self): (key, value) = self.read_ipoib_data() if key: # provision multiple IP over IB addresses logger.info("RDMA: provisioning multiple IP over IB addresses") self.provision_sriov_multiple_ib(value) elif self.ipv4_addr: logger.info("RDMA: provisioning single IP over IB address") # provision a single IP over IB address RDMADeviceHandler.wait_any_rdma_device(self.sriov_dir, self.device_check_timeout_sec, self.device_check_interval_sec) RDMADeviceHandler.update_iboip_interface(self.ipv4_addr, self.ipoib_check_timeout_sec, self.ipoib_check_interval_sec) else: logger.info("RDMA: missing IP address") def read_ipoib_data(self) : # read from KVP pool 0 to figure out the IP over IB addresses kvp_key_size = 512 kvp_value_size = 2048 driver_info_source = '/var/lib/hyperv/.kvp_pool_0' if not os.path.isfile(driver_info_source): logger.error("RDMA: can't read KVP pool 0") return (None, None) key_0 = None value_0 = None with open(driver_info_source, "rb") as pool_file: 
while True: key = pool_file.read(kvp_key_size) value = pool_file.read(kvp_value_size) if key and value: key_0 = key.partition(b"\x00")[0] if key_0 : key_0 = key_0.decode() if key_0 == "IPoIB_Data": value_0 = value.partition(b"\x00")[0] if value_0 : value_0 = value_0.decode() break else: break if key_0 == "IPoIB_Data": return (key_0, value_0) return (None, None) def provision_sriov_multiple_ib(self, value) : mac_ip_array = [] values = value.split("|") num_ips = len(values) - 1 # values[0] tells how many IPs. Format - NUMPAIRS: match = re.match(r"NUMPAIRS:(\d+)", values[0]) if match: num = int(match.groups(0)[0]) if num != num_ips: logger.error("RDMA: multiple IPs reported num={0} actual number of IPs={1}".format(num, num_ips)) return else: logger.error("RDMA: failed to find number of IP addresses in {0}".format(values[0])) return for i in range(1, num_ips+1): # each MAC/IP entry is of format : match = re.match(r"([^:]+):(\d+\.\d+\.\d+\.\d+)", values[i]) if match: mac_addr = match.groups(0)[0] ipv4_addr = match.groups(0)[1] mac_ip_array.append((mac_addr, ipv4_addr)) else: logger.error("RDMA: failed to find MAC/IP address in {0}".format(values[i])) return # try to assign all MAC/IP addresses to IB interfaces # retry for up to 60 times, with 1 seconds delay between each retry = 60 while retry > 0: count = self.update_iboip_interfaces(mac_ip_array) if count == len(mac_ip_array): return time.sleep(1) retry -= 1 logger.error("RDMA: failed to set all IP over IB addresses") # Assign addresses to all IP over IB interfaces specified in mac_ip_array # Return the number of IP addresses successfully assigned def update_iboip_interfaces(self, mac_ip_array): net_dir = "/sys/class/net" nics = os.listdir(net_dir) count = 0 for nic in nics: mac_addr = None with open(os.path.join(net_dir, nic, "address")) as address_file: mac_addr = address_file.read() if not mac_addr: logger.error("RDMA: can't read address for device {0}".format(nic)) continue mac_addr = mac_addr.upper() # if this is 
an IB interface, match IB-specific regex if re.match(r"ib\w+", nic): match = re.match(r".+(\w\w):(\w\w):(\w\w):\w\w:\w\w:(\w\w):(\w\w):(\w\w)\n", mac_addr) else: match = re.match(r"^(\w\w):(\w\w):(\w\w):(\w\w):(\w\w):(\w\w)$", mac_addr) if not match: logger.error("RDMA: failed to parse address for device {0} address {1}".format(nic, mac_addr)) continue # format a MAC address without : mac_addr = "" mac_addr = mac_addr.join(match.groups(0)) for mac_ip in mac_ip_array: if mac_ip[0] == mac_addr: ret = 0 try: # bring up the interface and set its IP address ip_command = ["ip", "link", "set", nic, "up"] shellutil.run_command(ip_command) ip_command = ["ip", "addr", "add", "{0}/16".format(mac_ip[1]), "dev", nic] shellutil.run_command(ip_command) except shellutil.CommandError as error: ret = error.returncode if ret == 0: logger.info("RDMA: set address {0} to device {1}".format(mac_ip[1], nic)) if ret and ret != 2: # return value 2 means the address is already set logger.error("RDMA: failed to set IP address {0} on device {1}".format(mac_ip[1], nic)) else: count += 1 break return count @staticmethod def update_iboip_interface(ipv4_addr, timeout_sec, check_interval_sec): logger.info("Wait for ib to become available") total_retries = timeout_sec / check_interval_sec n = 0 found_ib = None while not found_ib and n < total_retries: ret, output = shellutil.run_get_output("ifconfig -a") if ret != 0: raise Exception("Failed to list network interfaces") found_ib = re.search(r"(ib\S+):", output, re.IGNORECASE) if found_ib: break time.sleep(check_interval_sec) n += 1 if not found_ib: raise Exception("ib is not available") ibname = found_ib.groups()[0] if shellutil.run("ifconfig {0} up".format(ibname)) != 0: raise Exception("Could not run ifconfig {0} up".format(ibname)) netmask = 16 logger.info("RDMA: configuring IPv4 addr and netmask on ipoib interface") addr = '{0}/{1}'.format(ipv4_addr, netmask) if shellutil.run("ifconfig {0} {1}".format(ibname, addr)) != 0: raise Exception("Could not
set addr to {0} on {1}".format(addr, ibname)) logger.info("RDMA: ipoib address and netmask configured on interface") @staticmethod def update_dat_conf(paths, ipv4_addr): """ Looks at paths for dat.conf file and updates the ip address for the infiniband interface. """ logger.info("Updating DAPL configuration file") for f in paths: logger.info("RDMA: trying {0}".format(f)) if not os.path.isfile(f): logger.info( "RDMA: DAPL config not found at {0}".format(f)) continue logger.info("RDMA: DAPL config is at: {0}".format(f)) cfg = fileutil.read_file(f) new_cfg = RDMADeviceHandler.replace_dat_conf_contents( cfg, ipv4_addr) fileutil.write_file(f, new_cfg) logger.info("RDMA: DAPL configuration is updated") return raise Exception("RDMA: DAPL configuration file not found at predefined paths") @staticmethod def replace_dat_conf_contents(cfg, ipv4_addr): old = r"ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 \"\S+ 0\"" new = "ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 \"{0} 0\"".format( ipv4_addr) return re.sub(old, new, cfg) @staticmethod def write_rdma_config_to_device(path, ipv4_addr, mac_addr): data = RDMADeviceHandler.generate_rdma_config(ipv4_addr, mac_addr) logger.info( "RDMA: Updating device with configuration: {0}".format(data)) with open(path, "w") as f: logger.info("RDMA: Device opened for writing") f.write(data) logger.info("RDMA: Updated device with IPv4/MAC addr successfully") @staticmethod def generate_rdma_config(ipv4_addr, mac_addr): return 'rdmaMacAddress="{0}" rdmaIPv4Address="{1}"'.format(mac_addr, ipv4_addr) @staticmethod def wait_rdma_device(path, timeout_sec, check_interval_sec): logger.info("RDMA: waiting for device={0} timeout={1}s".format(path, timeout_sec)) total_retries = timeout_sec / check_interval_sec n = 0 while n < total_retries: if os.path.exists(path): logger.info("RDMA: device ready") return logger.verbose( "RDMA: device not ready, sleep {0}s".format(check_interval_sec)) time.sleep(check_interval_sec) n += 1 
logger.error("RDMA device wait timed out") raise Exception("The device did not show up in {0} seconds ({1} retries)".format( timeout_sec, total_retries)) @staticmethod def wait_any_rdma_device(directory, timeout_sec, check_interval_sec): logger.info( "RDMA: waiting for any Infiniband device at directory={0} timeout={1}s".format( directory, timeout_sec)) total_retries = timeout_sec / check_interval_sec n = 0 while n < total_retries: r = os.listdir(directory) if r: logger.info("RDMA: device found in {0}".format(directory)) return logger.verbose( "RDMA: device not ready, sleep {0}s".format(check_interval_sec)) time.sleep(check_interval_sec) n += 1 logger.error("RDMA device wait timed out") raise Exception("The device did not show up in {0} seconds ({1} retries)".format( timeout_sec, total_retries)) @staticmethod def update_network_interface(mac_addr, ipv4_addr): netmask = 16 logger.info("RDMA: will update the network interface with IPv4/MAC") if_name = RDMADeviceHandler.get_interface_by_mac(mac_addr) logger.info("RDMA: network interface found: {0}", if_name) logger.info("RDMA: bringing network interface up") if shellutil.run("ifconfig {0} up".format(if_name)) != 0: raise Exception("Could not bring up RDMA interface: {0}".format(if_name)) logger.info("RDMA: configuring IPv4 addr and netmask on interface") addr = '{0}/{1}'.format(ipv4_addr, netmask) if shellutil.run("ifconfig {0} {1}".format(if_name, addr)) != 0: raise Exception("Could not set addr to {1} on {0}".format(if_name, addr)) logger.info("RDMA: network address and netmask configured on interface") @staticmethod def get_interface_by_mac(mac): ret, output = shellutil.run_get_output("ifconfig -a") if ret != 0: raise Exception("Failed to list network interfaces") output = output.replace('\n', '') match = re.search(r"(eth\d).*(HWaddr|ether) {0}".format(mac), output, re.IGNORECASE) if match is None: raise Exception("Failed to get ifname with mac: {0}".format(mac)) output = match.group(0) eths = re.findall(r"eth\d",
output) if eths is None or len(eths) == 0: raise Exception("ifname with mac: {0} not found".format(mac)) return eths[-1] Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/suse.py000066400000000000000000000176651510742556200235120ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.pa.rdma.rdma import RDMAHandler from azurelinuxagent.common.version import DISTRO_VERSION from azurelinuxagent.common.utils.distro_version import DistroVersion class SUSERDMAHandler(RDMAHandler): def install_driver(self): # pylint: disable=R1710 """Install the appropriate driver package for the RDMA firmware""" if DistroVersion(DISTRO_VERSION) >= DistroVersion('15'): msg = 'SLE 15 and later only supports PCI pass through, no ' msg += 'special driver needed for IB interface' logger.info(msg) return True fw_version = self.get_rdma_version() if not fw_version: error_msg = 'RDMA: Could not determine firmware version. ' error_msg += 'Therefore, no driver will be installed.' 
logger.error(error_msg) return zypper_install = 'zypper -n in %s' zypper_install_noref = 'zypper -n --no-refresh in %s' zypper_lock = 'zypper addlock %s' zypper_remove = 'zypper -n rm %s' zypper_search = 'zypper -n se -s %s' zypper_unlock = 'zypper removelock %s' package_name = 'dummy' # Figure out the kernel that is running to find the proper kmp cmd = 'uname -r' status, kernel_release = shellutil.run_get_output(cmd) # pylint: disable=W0612 if 'default' in kernel_release: package_name = 'msft-rdma-kmp-default' info_msg = 'RDMA: Detected kernel-default' logger.info(info_msg) elif 'azure' in kernel_release: package_name = 'msft-rdma-kmp-azure' info_msg = 'RDMA: Detected kernel-azure' logger.info(info_msg) else: error_msg = 'RDMA: Could not detect kernel build, unable to ' error_msg += 'load kernel module. Kernel release: "%s"' logger.error(error_msg % kernel_release) return cmd = zypper_search % package_name status, repo_package_info = shellutil.run_get_output(cmd) driver_package_versions = [] driver_package_installed = False for entry in repo_package_info.split('\n'): if package_name in entry: sections = entry.split('|') if len(sections) < 4: error_msg = 'RDMA: Unexpected output from"%s": "%s"' logger.error(error_msg % (cmd, entry)) continue installed = sections[0].strip() version = sections[3].strip() driver_package_versions.append(version) if fw_version in version and installed.startswith('i'): info_msg = 'RDMA: Matching driver package "%s-%s" ' info_msg += 'is already installed, nothing to do.' logger.info(info_msg % (package_name, version)) return True if installed.startswith('i'): # A driver with a different version is installed driver_package_installed = True cmd = zypper_unlock % package_name result = shellutil.run(cmd) info_msg = 'Driver with different version installed ' info_msg += 'unlocked package "%s".' 
logger.info(info_msg % (package_name)) # If we get here the driver package is installed but the # version doesn't match or no package is installed requires_reboot = False if driver_package_installed: # Unloading the particular driver with rmmod does not work # We have to reboot after the new driver is installed if self.is_driver_loaded(): info_msg = 'RDMA: Currently loaded driver does not match the ' info_msg += 'firmware implementation, reboot will be required.' logger.info(info_msg) requires_reboot = True logger.info("RDMA: removing package %s" % package_name) cmd = zypper_remove % package_name shellutil.run(cmd) logger.info("RDMA: removed package %s" % package_name) logger.info("RDMA: looking for fw version %s in packages" % fw_version) for entry in driver_package_versions: if fw_version not in entry: logger.info("Package '%s' is not a match." % entry) else: logger.info("Package '%s' is a match. Installing." % entry) complete_name = '%s-%s' % (package_name, entry) cmd = zypper_install % complete_name result = shellutil.run(cmd) if result: error_msg = 'RDMA: Failed install of package "%s" ' error_msg += 'from available repositories.' logger.error(error_msg % complete_name) msg = 'RDMA: Successfully installed "%s" from ' msg += 'configured repositories' logger.info(msg % complete_name) # Lock the package so it does not accidentally get updated cmd = zypper_lock % package_name result = shellutil.run(cmd) info_msg = 'Applied lock to "%s"' % package_name logger.info(info_msg) if not self.load_driver_module() or requires_reboot: self.reboot_system() return True else: # pylint: disable=W0120 logger.info("RDMA: No suitable match in repos. 
Trying local.") local_packages = glob.glob('/opt/microsoft/rdma/*.rpm') for local_package in local_packages: logger.info("Examining: %s" % local_package) if local_package.endswith('.src.rpm'): continue if ( package_name in local_package and fw_version in local_package ): logger.info("RDMA: Installing: %s" % local_package) cmd = zypper_install_noref % local_package result = shellutil.run(cmd) if result and result != 106: error_msg = 'RDMA: Failed install of package "%s" ' error_msg += 'from local package cache' logger.error(error_msg % local_package) break msg = 'RDMA: Successfully installed "%s" from ' msg += 'local package cache' logger.info(msg % (local_package)) # Lock the package so it does not accidentally get updated cmd = zypper_lock % package_name result = shellutil.run(cmd) info_msg = 'Applied lock to "%s"' % package_name logger.info(info_msg) if not self.load_driver_module() or requires_reboot: self.reboot_system() return True else: error_msg = 'Unable to find driver package that matches ' error_msg += 'RDMA firmware version "%s"' % fw_version logger.error(error_msg) return Azure-WALinuxAgent-a976115/azurelinuxagent/pa/rdma/ubuntu.py000066400000000000000000000122371510742556200240430ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Requires Python 2.6+ and Openssl 1.0+
#

import glob # pylint: disable=W0611
import os
import re
import time # pylint: disable=W0611

import azurelinuxagent.common.conf as conf
import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.pa.rdma.rdma import RDMAHandler


class UbuntuRDMAHandler(RDMAHandler):

    def install_driver(self):
        #Install the appropriate driver package for the RDMA firmware
        nd_version = self.get_rdma_version()
        if not nd_version:
            logger.error("RDMA: Could not determine firmware version. No driver will be installed")
            return
        #replace . with _, we are looking for number like 144_0
        nd_version = re.sub(r'\.', '_', nd_version)

        #Check to see if we need to reconfigure driver
        status, module_name = shellutil.run_get_output('modprobe -R hv_network_direct', chk_err=False)
        if status != 0:
            logger.info("RDMA: modprobe -R hv_network_direct failed. Use module name hv_network_direct")
            module_name = "hv_network_direct"
        else:
            module_name = module_name.strip()
        logger.info("RDMA: current RDMA driver %s nd_version %s" % (module_name, nd_version))
        if module_name == 'hv_network_direct_%s' % nd_version:
            logger.info("RDMA: driver is installed and ND version matched. Skip reconfiguring driver")
            return

        #Reconfigure driver if one is available
        status, output = shellutil.run_get_output('modinfo hv_network_direct_%s' % nd_version)
        if status == 0:
            logger.info("RDMA: driver with ND version is installed. Link to module name")
            self.update_modprobed_conf(nd_version)
            return

        #Driver not found. We need to check to see if we need to update kernel
        if not conf.enable_rdma_update():
            logger.info("RDMA: driver update is disabled. Skip kernel update")
            return

        status, output = shellutil.run_get_output('uname -r')
        if status != 0:
            return
        if not re.search('-azure$', output):
            logger.error("RDMA: skip driver update on non-Azure kernel")
            return
        kernel_version = re.sub('-azure$', '', output)
        kernel_version = re.sub('-', '.', kernel_version)

        #Find the new kernel package version
        status, output = shellutil.run_get_output('apt-get update')
        if status != 0:
            return
        status, output = shellutil.run_get_output('apt-cache show --no-all-versions linux-azure')
        if status != 0:
            return
        r = re.search(r'Version: (\S+)', output)
        if not r:
            logger.error("RDMA: version not found in package linux-azure.")
            return
        package_version = r.groups()[0]
        #Remove the trailing component from the package version
        package_version = re.sub(r"\.\d+$", "", package_version)

        logger.info('RDMA: kernel_version=%s package_version=%s' % (kernel_version, package_version))
        kernel_version_array = [ int(x) for x in kernel_version.split('.') ]
        package_version_array = [ int(x) for x in package_version.split('.') ]
        if kernel_version_array < package_version_array:
            logger.info("RDMA: newer version available, update kernel and reboot")
            status, output = shellutil.run_get_output('apt-get -y install linux-azure')
            if status:
                logger.error("RDMA: kernel update failed")
                return
            self.reboot_system()
        else:
            logger.error("RDMA: no kernel update is available for ND version %s" % nd_version)

    def update_modprobed_conf(self, nd_version):
        #Update /etc/modprobe.d/vmbus-rdma.conf to point to the correct driver
        modprobed_file = '/etc/modprobe.d/vmbus-rdma.conf'
        lines = ''
        if not os.path.isfile(modprobed_file):
            logger.info("RDMA: %s not found, it will be created" % modprobed_file)
        else:
            with open(modprobed_file, 'r') as f:
                lines = f.read()
        r = re.search(r'alias hv_network_direct hv_network_direct_\S+', lines)
        if r:
            lines = re.sub(r'alias hv_network_direct hv_network_direct_\S+', 'alias hv_network_direct hv_network_direct_%s' % nd_version, lines)
        else:
            lines += '\nalias hv_network_direct hv_network_direct_%s\n' % nd_version
        with open('/etc/modprobe.d/vmbus-rdma.conf', 'w') as f:
            f.write(lines)
        logger.info("RDMA: hv_network_direct alias updated to ND %s" % nd_version)
Azure-WALinuxAgent-a976115/bin/000077500000000000000000000000001510742556200161425ustar00rootroot00000000000000
Azure-WALinuxAgent-a976115/bin/py3/000077500000000000000000000000001510742556200166555ustar00rootroot00000000000000
Azure-WALinuxAgent-a976115/bin/py3/waagent000077500000000000000000000030471510742556200202350ustar00rootroot00000000000000
#!/usr/bin/env python3
#
# Azure Linux Agent
#
# Copyright 2015 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
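The kernel-update check in the Ubuntu RDMA handler above compares dotted versions numerically, as lists of integers, rather than as strings. A standalone sketch of that comparison:

```python
import re

def version_list(v):
    # As in the handler: dashes become dots, then every component is
    # compared as an integer so that 10 sorts after 9.
    return [int(x) for x in re.sub('-', '.', v).split('.')]

assert version_list('4.15.0-1009') < version_list('4.15.0-1010')
# Plain string comparison would order these incorrectly:
assert '4.15.0-9' > '4.15.0-10'
assert version_list('4.15.0-9') < version_list('4.15.0-10')
```

Python compares lists element by element, so the integer-list form gives the expected ordering for any number of components.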
#
# Requires Python 2.6 and Openssl 1.0+
#
# Implements parts of RFC 2131, 1541, 1497 and
# http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx
# http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx
#

import os
import sys

if sys.version_info[0] == 2:
    import imp
else:
    import importlib

if __name__ == '__main__':
    import azurelinuxagent.agent as agent
    """
    Invoke main method of agent
    """
    agent.main()

if __name__ == 'waagent':
    """
    Load waagent2.0 to support old version of extensions
    """
    if sys.version_info[0] == 3:
        raise ImportError("waagent2.0 doesn't support python3")
    bin_path = os.path.dirname(os.path.abspath(__file__))
    agent20_path = os.path.join(bin_path, "waagent2.0")
    if not os.path.isfile(agent20_path):
        raise ImportError("Can't load waagent")
    agent20 = imp.load_source('waagent', agent20_path)
    __all__ = dir(agent20)
Azure-WALinuxAgent-a976115/bin/waagent000077500000000000000000000030461510742556200175210ustar00rootroot00000000000000
#!/usr/bin/env python
#
# Azure Linux Agent
#
# Copyright 2015 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6 and Openssl 1.0+
#
# Implements parts of RFC 2131, 1541, 1497 and
# http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx
# http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx
#

import os
import sys

if sys.version_info[0] == 2:
    import imp
else:
    import importlib

if __name__ == '__main__':
    import azurelinuxagent.agent as agent
    """
    Invoke main method of agent
    """
    agent.main()

if __name__ == 'waagent':
    """
    Load waagent2.0 to support old version of extensions
    """
    if sys.version_info[0] == 3:
        raise ImportError("waagent2.0 doesn't support python3")
    bin_path = os.path.dirname(os.path.abspath(__file__))
    agent20_path = os.path.join(bin_path, "waagent2.0")
    if not os.path.isfile(agent20_path):
        raise ImportError("Can't load waagent")
    agent20 = imp.load_source('waagent', agent20_path)
    __all__ = dir(agent20)
Azure-WALinuxAgent-a976115/bin/waagent2.0000066400000000000000000007551201510742556200177450ustar00rootroot00000000000000
#!/usr/bin/env python
#
# Azure Linux Agent
#
# Copyright 2015 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
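Both launcher scripts above load `waagent2.0` from a plain file path with `imp.load_source` on Python 2. A sketch of the Python 3 counterpart using `importlib` — the module content and file name here are illustrative, not part of the original repository. Passing an explicit `SourceFileLoader` handles extensionless files such as `waagent2.0`:

```python
import importlib.machinery
import importlib.util
import os
import tempfile

# Load a module from an arbitrary path, including extensionless files --
# the importlib equivalent of imp.load_source.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'agent2.0')   # hypothetical file
    with open(path, 'w') as f:
        f.write('VALUE = 42\n')
    loader = importlib.machinery.SourceFileLoader('waagent', path)
    spec = importlib.util.spec_from_file_location('waagent', path, loader=loader)
    module = importlib.util.module_from_spec(spec)
    loader.exec_module(module)
    assert module.VALUE == 42
```

This matters because `spec_from_file_location` alone cannot infer a loader for a file without a recognized source suffix.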
# # Requires Python 2.6 and Openssl 1.0+ # # Implements parts of RFC 2131, 1541, 1497 and # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx # http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx # import crypt import random import array import base64 import httplib import os import os.path import platform import pwd import re import shutil import socket import SocketServer import struct import string import subprocess import sys import tempfile import textwrap import threading import time import traceback import xml.dom.minidom import fcntl import inspect import zipfile import json import datetime import xml.sax.saxutils from distutils.version import LooseVersion if not hasattr(subprocess,'check_output'): def check_output(*popenargs, **kwargs): r"""Backport from subprocess module from python 2.7""" if 'stdout' in kwargs: raise ValueError('stdout argument not allowed, it will be overridden.') process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs) output, unused_err = process.communicate() retcode = process.poll() if retcode: cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] raise subprocess.CalledProcessError(retcode, cmd, output=output) return output # Exception classes used by this module. class CalledProcessError(Exception): def __init__(self, returncode, cmd, output=None): self.returncode = returncode self.cmd = cmd self.output = output def __str__(self): return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode) subprocess.check_output=check_output subprocess.CalledProcessError=CalledProcessError GuestAgentName = "WALinuxAgent" GuestAgentLongName = "Azure Linux Agent" GuestAgentVersion = "WALinuxAgent-2.0.16" ProtocolVersion = "2012-11-30" #WARNING this value is used to confirm the correct fabric protocol. 
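The `check_output` backport defined above gives Python 2.6 the same contract that `subprocess.check_output` provides from 2.7 on: capture stdout, and raise `CalledProcessError` carrying the return code on failure. A standalone demonstration of that contract (not part of the original file):

```python
import subprocess
import sys

# Success: stdout is captured and returned as bytes.
out = subprocess.check_output([sys.executable, "-c", "print('ok')"])
print(out.decode().strip())  # ok

# Failure: a non-zero exit raises CalledProcessError with the code.
try:
    subprocess.check_output([sys.executable, "-c", "import sys; sys.exit(3)"])
except subprocess.CalledProcessError as e:
    print(e.returncode)  # 3
```

The module-level assignment `subprocess.check_output = check_output` then makes the rest of waagent2.0 agnostic to which Python minor version it runs on.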
Config = None WaAgent = None DiskActivated = False Openssl = "openssl" Children = [] ExtensionChildren = [] VMM_STARTUP_SCRIPT_NAME='install' VMM_CONFIG_FILE_NAME='linuxosconfiguration.xml' global RulesFiles RulesFiles = [ "/lib/udev/rules.d/75-persistent-net-generator.rules", "/etc/udev/rules.d/70-persistent-net.rules" ] VarLibDhcpDirectories = ["/var/lib/dhclient", "/var/lib/dhcpcd", "/var/lib/dhcp"] EtcDhcpClientConfFiles = ["/etc/dhcp/dhclient.conf", "/etc/dhcp3/dhclient.conf"] global LibDir LibDir = "/var/lib/waagent" global provisioned provisioned=False global provisionError provisionError=None HandlerStatusToAggStatus = {"installed":"Installing", "enabled":"Ready", "unintalled":"NotReady", "disabled":"NotReady"} WaagentConf = """\ # # Azure Linux Agent Configuration # Role.StateConsumer=None # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role configuration. Role.TopologyConsumer=None # Specified program is invoked with XML file argument specifying role topology. Provisioning.Enabled=y # Provisioning.DeleteRootPassword=y # Password authentication for root account will be unavailable. Provisioning.RegenerateSshHostKeyPair=y # Generate fresh host key pair. Provisioning.SshHostKeyPairType=rsa # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.MonitorHostName=y # Monitor host name changes and publish changes via DHCP requests. ResourceDisk.Format=y # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Filesystem=ext4 # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.MountPoint=/mnt/resource # ResourceDisk.EnableSwap=n # Create and use swapfile on resource disk. ResourceDisk.SwapSizeMB=0 # Size of the swapfile. LBProbeResponder=y # Respond to load balancer probes if requested by Azure. 
Logs.Verbose=n                          # Enable verbose logs
OS.RootDeviceScsiTimeout=300            # Root device timeout in seconds.
OS.OpensslPath=None                     # If "None", the system default version is used.
"""

README_FILENAME="DATALOSS_WARNING_README.txt"
README_FILECONTENT="""\
WARNING: THIS IS A TEMPORARY DISK.

Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT.

Please do not use this disk for storing any personal or application data.

For additional details, please refer to the MSDN documentation at:
http://msdn.microsoft.com/en-us/library/windowsazure/jj672979.aspx
"""

############################################################
# BEGIN DISTRO CLASS DEFS
############################################################

############################################################
# AbstractDistro
############################################################
class AbstractDistro(object):
    """
    AbstractDistro defines a skeleton necessary for a concrete Distro class.

    Generic methods and attributes are kept here, distribution specific
    attributes and behavior are to be placed in the concrete child named
    distroDistro, where distro is the string returned by calling
    python platform.linux_distribution()[0].
    So for CentOS the derived class is called 'centosDistro'.
    """
    def __init__(self):
        """
        Generic Attributes go here.  These are based on 'majority rules'.
        This __init__() may be called or overridden by the child.
""" self.agent_service_name = os.path.basename(sys.argv[0]) self.selinux=None self.service_cmd='/usr/sbin/service' self.ssh_service_restart_option='restart' self.ssh_service_name='ssh' self.ssh_config_file='/etc/ssh/sshd_config' self.hostname_file_path='/etc/hostname' self.dhcp_client_name='dhclient' self.requiredDeps = [ 'route', 'shutdown', 'ssh-keygen', 'useradd', 'usermod', 'openssl', 'sfdisk', 'fdisk', 'mkfs', 'sed', 'grep', 'sudo', 'parted' ] self.init_script_file='/etc/init.d/waagent' self.agent_package_name='WALinuxAgent' self.fileBlackList = [ "/root/.bash_history", "/var/log/waagent.log",'/etc/resolv.conf' ] self.agent_files_to_uninstall = ["/etc/waagent.conf", "/etc/logrotate.d/waagent"] self.grubKernelBootOptionsFile = '/etc/default/grub' self.grubKernelBootOptionsLine = 'GRUB_CMDLINE_LINUX_DEFAULT=' self.getpidcmd = 'pidof' self.mount_dvd_cmd = 'mount' self.sudoers_dir_base = '/etc' self.waagent_conf_file = WaagentConf self.shadow_file_mode=0600 self.shadow_file_path="/etc/shadow" self.dhcp_enabled = False def isSelinuxSystem(self): """ Checks and sets self.selinux = True if SELinux is available on system. """ if self.selinux == None: if Run("which getenforce",chk_err=False): self.selinux = False else: self.selinux = True return self.selinux def isSelinuxRunning(self): """ Calls shell command 'getenforce' and returns True if 'Enforcing'. """ if self.isSelinuxSystem(): return RunGetOutput("getenforce")[1].startswith("Enforcing") else: return False def setSelinuxEnforce(self,state): """ Calls shell command 'setenforce' with 'state' and returns resulting exit code. """ if self.isSelinuxSystem(): if state: s = '1' else: s='0' return Run("setenforce "+s) def setSelinuxContext(self,path,cn): """ Calls shell 'chcon' with 'path' and 'cn' context. Returns exit result. 
""" if self.isSelinuxSystem(): if not os.path.exists(path): Error("Path does not exist: {0}".format(path)) return 1 return Run('chcon ' + cn + ' ' + path) def setHostname(self,name): """ Shell call to hostname. Returns resulting exit code. """ return Run('hostname ' + name) def publishHostname(self,name): """ Set the contents of the hostname file to 'name'. Return 1 on failure. """ try: r=SetFileContents(self.hostname_file_path, name) for f in EtcDhcpClientConfFiles: if os.path.exists(f) and FindStringInFile(f,r'^[^#]*?send\s*host-name.*?(|gethostname[(,)])') == None : r=ReplaceFileContentsAtomic('/etc/dhcp/dhclient.conf', "send host-name \"" + name + "\";\n" + "\n".join(filter(lambda a: not a.startswith("send host-name"), GetFileContents('/etc/dhcp/dhclient.conf').split('\n')))) except: return 1 return r def installAgentServiceScriptFiles(self): """ Create the waagent support files for service installation. Called by registerAgentService() Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def registerAgentService(self): """ Calls installAgentService to create service files. Shell exec service registration commands. (e.g. chkconfig --add waagent) Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def uninstallAgentService(self): """ Call service subsystem to remove waagent script. Abstract Virtual Function. Over-ridden in concrete Distro classes. 
""" pass def unregisterAgentService(self): """ Calls self.stopAgentService and call self.uninstallAgentService() """ self.stopAgentService() self.uninstallAgentService() def startAgentService(self): """ Service call to start the Agent service """ return Run(self.service_cmd + ' ' + self.agent_service_name + ' start') def stopAgentService(self): """ Service call to stop the Agent service """ return Run(self.service_cmd + ' ' + self.agent_service_name + ' stop',False) def restartSshService(self): """ Service call to re(start) the SSH service """ sshRestartCmd = self.service_cmd + " " + self.ssh_service_name + " " + self.ssh_service_restart_option retcode = Run(sshRestartCmd) if retcode > 0: Error("Failed to restart SSH service with return code:" + str(retcode)) return retcode def sshDeployPublicKey(self,fprint,path): """ Generic sshDeployPublicKey - over-ridden in some concrete Distro classes due to minor differences in openssl packages deployed """ error=0 SshPubKey = OvfEnv().OpensslToSsh(fprint) if SshPubKey != None: AppendFileContents(path, SshPubKey) else: Error("Failed: " + fprint + ".crt -> " + path) error = 1 return error def checkPackageInstalled(self,p): """ Query package database for prescence of an installed package. Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def checkPackageUpdateable(self,p): """ Online check if updated package of walinuxagent is available. Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def deleteRootPassword(self): """ Generic root password removal. 
""" filepath="/etc/shadow" ReplaceFileContentsAtomic(filepath,"root:*LOCK*:14600::::::\n" + "\n".join(filter(lambda a: not a.startswith("root:"),GetFileContents(filepath).split('\n')))) os.chmod(filepath,self.shadow_file_mode) if self.isSelinuxSystem(): self.setSelinuxContext(filepath,'system_u:object_r:shadow_t:s0') Log("Root password deleted.") return 0 def changePass(self,user,password): Log("Change user password") crypt_id = Config.get("Provisioning.PasswordCryptId") if crypt_id is None: crypt_id = "6" salt_len = Config.get("Provisioning.PasswordCryptSaltLength") try: salt_len = int(salt_len) if salt_len < 0 or salt_len > 10: salt_len = 10 except (ValueError, TypeError): salt_len = 10 return self.chpasswd(user, password, crypt_id=crypt_id, salt_len=salt_len) def chpasswd(self, username, password, crypt_id=6, salt_len=10): passwd_hash = self.gen_password_hash(password, crypt_id, salt_len) cmd = "usermod -p '{0}' {1}".format(passwd_hash, username) ret, output = RunGetOutput(cmd, log_cmd=False) if ret != 0: return "Failed to set password for {0}: {1}".format(username, output) def gen_password_hash(self, password, crypt_id, salt_len): collection = string.ascii_letters + string.digits salt = ''.join(random.choice(collection) for _ in range(salt_len)) salt = "${0}${1}".format(crypt_id, salt) return crypt.crypt(password, salt) def load_ata_piix(self): return WaAgent.TryLoadAtapiix() def unload_ata_piix(self): """ Generic function to remove ata_piix.ko. """ return WaAgent.TryUnloadAtapiix() def deprovisionWarnUser(self): """ Generic user warnings used at deprovision. """ print("WARNING! 
Nameserver configuration in /etc/resolv.conf will be deleted.") def deprovisionDeleteFiles(self): """ Files to delete when VM is deprovisioned """ for a in VarLibDhcpDirectories: Run("rm -f " + a + "/*") # Clear LibDir, remove nameserver and root bash history for f in os.listdir(LibDir) + self.fileBlackList: try: os.remove(f) except: pass return 0 def uninstallDeleteFiles(self): """ Files to delete when agent is uninstalled. """ for f in self.agent_files_to_uninstall: try: os.remove(f) except: pass return 0 def checkDependencies(self): """ Generic dependency check. Return 1 unless all dependencies are satisfied. """ if self.checkPackageInstalled('NetworkManager'): Error(GuestAgentLongName + " is not compatible with network-manager.") return 1 try: m= __import__('pyasn1') except ImportError: Error(GuestAgentLongName + " requires python-pyasn1 for your Linux distribution.") return 1 for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def packagedInstall(self,buildroot): """ Called from setup.py for use by RPM. Copies generated files waagent.conf, under the buildroot. """ if not os.path.exists(buildroot+'/etc'): os.mkdir(buildroot+'/etc') SetFileContents(buildroot+'/etc/waagent.conf', MyDistro.waagent_conf_file) if not os.path.exists(buildroot+'/etc/logrotate.d'): os.mkdir(buildroot+'/etc/logrotate.d') SetFileContents(buildroot+'/etc/logrotate.d/waagent', WaagentLogrotate) self.init_script_file=buildroot+self.init_script_file # this allows us to call installAgentServiceScriptFiles() if not os.path.exists(os.path.dirname(self.init_script_file)): os.mkdir(os.path.dirname(self.init_script_file)) self.installAgentServiceScriptFiles() def GetIpv4Address(self): """ Return the ip of the first active non-loopback interface. 
""" addr='' iface,addr=GetFirstActiveNetworkInterfaceNonLoopback() return addr def GetMacAddress(self): return GetMacAddress() def GetInterfaceName(self): return GetFirstActiveNetworkInterfaceNonLoopback()[0] def RestartInterface(self, iface, max_retry=3): for retry in range(1, max_retry + 1): ret = Run("ifdown " + iface + " && ifup " + iface) if ret == 0: return Log("Failed to restart interface: {0}, ret={1}".format(iface, ret)) if retry < max_retry: Log("Retry restart interface in 5 seconds") time.sleep(5) def CreateAccount(self,user, password, expiration, thumbprint): return CreateAccount(user, password, expiration, thumbprint) def DeleteAccount(self,user): return DeleteAccount(user) def ActivateResourceDisk(self): """ Format, mount, and if specified in the configuration set resource disk as swap. """ global DiskActivated format = Config.get("ResourceDisk.Format") if format == None or format.lower().startswith("n"): DiskActivated = True return device = DeviceForIdePort(1) if device == None: Error("ActivateResourceDisk: Unable to detect disk topology.") return device = "/dev/" + device mountlist = RunGetOutput("mount")[1] mountpoint = GetMountPoint(mountlist, device) if(mountpoint): Log("ActivateResourceDisk: " + device + "1 is already mounted.") else: mountpoint = Config.get("ResourceDisk.MountPoint") if mountpoint == None: mountpoint = "/mnt/resource" CreateDir(mountpoint, "root", 0755) fs = Config.get("ResourceDisk.Filesystem") if fs == None: fs = "ext3" partition = device + "1" #Check partition type Log("Detect GPT...") ret = RunGetOutput("parted {0} print".format(device)) if ret[0] == 0 and "gpt" in ret[1]: Log("GPT detected.") #GPT(Guid Partition Table) is used. #Get partitions. parts = filter(lambda x : re.match("^\s*[0-9]+", x), ret[1].split("\n")) #If there are more than 1 partitions, remove all partitions #and create a new one using the entire disk space. 
if len(parts) > 1: for i in range(1, len(parts) + 1): Run("parted {0} rm {1}".format(device, i)) Run("parted {0} mkpart primary 0% 100%".format(device)) Run("mkfs." + fs + " " + partition + " -F") else: existingFS = RunGetOutput("sfdisk -q -c " + device + " 1", chk_err=False)[1].rstrip() if existingFS == "7" and fs != "ntfs": Run("sfdisk -c " + device + " 1 83") Run("mkfs." + fs + " " + partition) if Run("mount " + partition + " " + mountpoint, chk_err=False): #If mount failed, try to format the partition and mount again Warn("Failed to mount resource disk. Retry mounting.") Run("mkfs." + fs + " " + partition + " -F") if Run("mount " + partition + " " + mountpoint): Error("ActivateResourceDisk: Failed to mount resource disk (" + partition + ").") return Log("Resource disk (" + partition + ") is mounted at " + mountpoint + " with fstype " + fs) #Create README file under the root of resource disk SetFileContents(os.path.join(mountpoint,README_FILENAME), README_FILECONTENT) DiskActivated = True #Create swap space swap = Config.get("ResourceDisk.EnableSwap") if swap == None or swap.lower().startswith("n"): return sizeKB = int(Config.get("ResourceDisk.SwapSizeMB")) * 1024 if os.path.isfile(mountpoint + "/swapfile") and os.path.getsize(mountpoint + "/swapfile") != (sizeKB * 1024): os.remove(mountpoint + "/swapfile") if not os.path.isfile(mountpoint + "/swapfile"): Run("umask 0077 && dd if=/dev/zero of=" + mountpoint + "/swapfile bs=1024 count=" + str(sizeKB)) Run("mkswap " + mountpoint + "/swapfile") if not Run("swapon " + mountpoint + "/swapfile"): Log("Enabled " + str(sizeKB) + " KB of swap at " + mountpoint + "/swapfile") else: Error("ActivateResourceDisk: Failed to activate swap at " + mountpoint + "/swapfile") def Install(self): return Install() def mediaHasFilesystem(self,dsk): if len(dsk) == 0 : return False if Run("LC_ALL=C fdisk -l " + dsk + " | grep Disk"): return False return True def mountDVD(self,dvd,location): return RunGetOutput(self.mount_dvd_cmd + ' ' + 
dvd + ' ' + location) def GetHome(self): return GetHome() def getDhcpClientName(self): return self.dhcp_client_name def initScsiDiskTimeout(self): """ Set the SCSI disk timeout when the agent starts running """ self.setScsiDiskTimeout() def setScsiDiskTimeout(self): """ Iterate all SCSI disks(include hot-add) and set their timeout if their value are different from the OS.RootDeviceScsiTimeout """ try: scsiTimeout = Config.get("OS.RootDeviceScsiTimeout") for diskName in [disk for disk in os.listdir("/sys/block") if disk.startswith("sd")]: self.setBlockDeviceTimeout(diskName, scsiTimeout) except: pass def setBlockDeviceTimeout(self, device, timeout): """ Set SCSI disk timeout by set /sys/block/sd*/device/timeout """ if timeout != None and device: filePath = "/sys/block/" + device + "/device/timeout" if(GetFileContents(filePath).splitlines()[0].rstrip() != timeout): SetFileContents(filePath,timeout) Log("SetBlockDeviceTimeout: Update the device " + device + " with timeout " + timeout) def waitForSshHostKey(self, path): """ Provide a dummy waiting, since by default, ssh host key is created by waagent and the key should already been created. """ if(os.path.isfile(path)): return True else: Error("Can't find host key: {0}".format(path)) return False def isDHCPEnabled(self): return self.dhcp_enabled def stopDHCP(self): """ Stop the system DHCP client so that the agent can bind on its port. If the distro has set dhcp_enabled to True, it will need to provide an implementation of this method. """ raise NotImplementedError('stopDHCP method missing') def startDHCP(self): """ Start the system DHCP client. If the distro has set dhcp_enabled to True, it will need to provide an implementation of this method. """ raise NotImplementedError('startDHCP method missing') def translateCustomData(self, data): """ Translate the custom data from a Base64 encoding. Default to no-op. 
""" decodeCustomData = Config.get("Provisioning.DecodeCustomData") if decodeCustomData != None and decodeCustomData.lower().startswith("y"): return base64.b64decode(data) return data def getConfigurationPath(self): return "/etc/waagent.conf" def getProcessorCores(self): return int(RunGetOutput("grep 'processor.*:' /proc/cpuinfo |wc -l")[1]) def getTotalMemory(self): return int(RunGetOutput("grep MemTotal /proc/meminfo |awk '{print $2}'")[1])/1024 def getInterfaceNameByMac(self, mac): ret, output = RunGetOutput("ifconfig -a") if ret != 0: raise Exception("Failed to get network interface info") output = output.replace('\n', '') match = re.search(r"(eth\d).*(HWaddr|ether) {0}".format(mac), output, re.IGNORECASE) if match is None: raise Exception("Failed to get ifname with mac: {0}".format(mac)) output = match.group(0) eths = re.findall(r"eth\d", output) if eths is None or len(eths) == 0: raise Exception("Failed to get ifname with mac: {0}".format(mac)) return eths[-1] def configIpV4(self, ifName, addr, netmask=24): ret, output = RunGetOutput("ifconfig {0} up".format(ifName)) if ret != 0: raise Exception("Failed to bring up {0}: {1}".format(ifName, output)) ret, output = RunGetOutput("ifconfig {0} {1}/{2}".format(ifName, addr, netmask)) if ret != 0: raise Exception("Failed to config ipv4 for {0}: {1}".format(ifName, output)) def setDefaultGateway(self, gateway): Run("/sbin/route add default gw" + gateway, chk_err=False) def routeAdd(self, net, mask, gateway): Run("/sbin/route add -net " + net + " netmask " + mask + " gw " + gateway, chk_err=False) ############################################################ # GentooDistro ############################################################ gentoo_init_file = """\ #!/sbin/runscript command=/usr/sbin/waagent pidfile=/var/run/waagent.pid command_args=-daemon command_background=true name="Azure Linux Agent" depend() { need localmount use logger network after bootmisc modules } """ class gentooDistro(AbstractDistro): """ Gentoo 
distro concrete class """ def __init__(self): # super(gentooDistro,self).__init__() self.service_cmd='/sbin/service' self.ssh_service_name='sshd' self.hostname_file_path='/etc/conf.d/hostname' self.dhcp_client_name='dhcpcd' self.shadow_file_mode=0640 self.init_file=gentoo_init_file def publishHostname(self,name): try: if (os.path.isfile(self.hostname_file_path)): r=ReplaceFileContentsAtomic(self.hostname_file_path, "hostname=\"" + name + "\"\n" + "\n".join(filter(lambda a: not a.startswith("hostname="), GetFileContents(self.hostname_file_path).split("\n")))) except: return 1 return r def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0755) def registerAgentService(self): self.installAgentServiceScriptFiles() return Run('rc-update add ' + self.agent_service_name + ' default') def uninstallAgentService(self): return Run('rc-update del ' + self.agent_service_name + ' default') def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def checkPackageInstalled(self,p): if Run('eix -I ^' + p + '$',chk_err=False): return 0 else: return 1 def checkPackageUpdateable(self,p): if Run('eix -u ^' + p + '$',chk_err=False): return 0 else: return 1 def RestartInterface(self, iface): Run("/etc/init.d/net." + iface + " restart") ############################################################ # SuSEDistro ############################################################ suse_init_file = """\ #! /bin/sh # # Azure Linux Agent sysV init script # # Copyright 2013 Microsoft Corporation # Copyright SUSE LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # /etc/init.d/waagent # # and symbolic link # # /usr/sbin/rcwaagent # # System startup script for the waagent # ### BEGIN INIT INFO # Provides: AzureLinuxAgent # Required-Start: $network sshd # Required-Stop: $network sshd # Default-Start: 3 5 # Default-Stop: 0 1 2 6 # Description: Start the AzureLinuxAgent ### END INIT INFO PYTHON=/usr/bin/python WAZD_BIN=/usr/sbin/waagent WAZD_CONF=/etc/waagent.conf WAZD_PIDFILE=/var/run/waagent.pid test -x "$WAZD_BIN" || { echo "$WAZD_BIN not installed"; exit 5; } test -e "$WAZD_CONF" || { echo "$WAZD_CONF not found"; exit 6; } . /etc/rc.status # First reset status of this service rc_reset # Return values acc. to LSB for all commands but status: # 0 - success # 1 - misc error # 2 - invalid or excess args # 3 - unimplemented feature (e.g. reload) # 4 - insufficient privilege # 5 - program not installed # 6 - program not configured # # Note that starting an already running service, stopping # or restarting a not-running service as well as the restart # with force-reload (in case signalling is not supported) are # considered a success. case "$1" in start) echo -n "Starting AzureLinuxAgent" ## Start daemon with startproc(8). If this fails ## the echo return value is set appropriate. startproc -f ${PYTHON} ${WAZD_BIN} -daemon rc_status -v ;; stop) echo -n "Shutting down AzureLinuxAgent" ## Stop daemon with killproc(8) and if this fails ## set echo the echo return value. killproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; try-restart) ## Stop the service and if this succeeds (i.e. 
the ## service was running before), start it again. $0 status >/dev/null && $0 restart rc_status ;; restart) ## Stop the service and regardless of whether it was ## running or not, start it again. $0 stop sleep 1 $0 start rc_status ;; force-reload|reload) rc_status ;; status) echo -n "Checking for service AzureLinuxAgent " ## Check status with checkproc(8), if process is running ## checkproc will return with exit status 0. checkproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; probe) ;; *) echo "Usage: $0 {start|stop|status|try-restart|restart|force-reload|reload}" exit 1 ;; esac rc_exit """ class SuSEDistro(AbstractDistro): """ SuSE Distro concrete class Put SuSE specific behavior here... """ def __init__(self): super(SuSEDistro,self).__init__() self.service_cmd='/sbin/service' self.ssh_service_name='sshd' self.kernel_boot_options_file='/boot/grub/menu.lst' self.hostname_file_path='/etc/HOSTNAME' self.requiredDeps += [ "/sbin/insserv" ] self.init_file=suse_init_file self.dhcp_client_name='dhcpcd' if ((DistInfo(fullname=1)[0] == 'SUSE Linux Enterprise Server' and DistInfo()[1] >= '12') or \ (DistInfo(fullname=1)[0] == 'openSUSE' and DistInfo()[1] >= '13.2')): self.dhcp_client_name='wickedd-dhcp4' self.grubKernelBootOptionsFile = '/boot/grub/menu.lst' self.grubKernelBootOptionsLine = 'kernel' self.getpidcmd='pidof ' self.dhcp_enabled=True def checkPackageInstalled(self,p): if Run("rpm -q " + p,chk_err=False): return 0 else: return 1 def checkPackageUpdateable(self,p): if Run("zypper list-updates | grep " + p,chk_err=False): return 1 else: return 0 def installAgentServiceScriptFiles(self): try: SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0744) except: pass def registerAgentService(self): self.installAgentServiceScriptFiles() return Run('insserv ' + self.agent_service_name) def uninstallAgentService(self): return Run('insserv -r ' + self.agent_service_name) def unregisterAgentService(self): self.stopAgentService() 
return self.uninstallAgentService() def startDHCP(self): Run("service " + self.dhcp_client_name + " start", chk_err=False) def stopDHCP(self): Run("service " + self.dhcp_client_name + " stop", chk_err=False) ############################################################ # redhatDistro ############################################################ redhat_init_file= """\ #!/bin/bash # # Init file for AzureLinuxAgent. # # chkconfig: 2345 60 80 # description: AzureLinuxAgent # # source function library . /etc/rc.d/init.d/functions RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent start() { echo -n $"Starting $FriendlyName: " $WAZD_BIN -daemon & } stop() { echo -n $"Stopping $FriendlyName: " killproc -p /var/run/waagent.pid $WAZD_BIN RETVAL=$? echo return $RETVAL } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; reload) ;; report) ;; status) status $WAZD_BIN RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL """ class redhatDistro(AbstractDistro): """ Redhat Distro concrete class Put Redhat specific behavior here... 
""" def __init__(self): super(redhatDistro,self).__init__() self.service_cmd='/sbin/service' self.ssh_service_restart_option='condrestart' self.ssh_service_name='sshd' self.hostname_file_path= None if DistInfo()[1] < '7.0' else '/etc/hostname' self.init_file=redhat_init_file self.grubKernelBootOptionsFile = '/boot/grub/menu.lst' self.grubKernelBootOptionsLine = 'kernel' def publishHostname(self,name): super(redhatDistro,self).publishHostname(name) if DistInfo()[1] < '7.0' : filepath = "/etc/sysconfig/network" if os.path.isfile(filepath): ReplaceFileContentsAtomic(filepath, "HOSTNAME=" + name + "\n" + "\n".join(filter(lambda a: not a.startswith("HOSTNAME"), GetFileContents(filepath).split('\n')))) ethernetInterface = MyDistro.GetInterfaceName() filepath = "/etc/sysconfig/network-scripts/ifcfg-" + ethernetInterface if os.path.isfile(filepath): ReplaceFileContentsAtomic(filepath, "DHCP_HOSTNAME=" + name + "\n" + "\n".join(filter(lambda a: not a.startswith("DHCP_HOSTNAME"), GetFileContents(filepath).split('\n')))) return 0 def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0744) return 0 def registerAgentService(self): self.installAgentServiceScriptFiles() return Run('chkconfig --add waagent') def uninstallAgentService(self): return Run('chkconfig --del ' + self.agent_service_name) def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def checkPackageInstalled(self,p): if Run("yum list installed " + p,chk_err=False): return 0 else: return 1 def checkPackageUpdateable(self,p): if Run("yum check-update | grep "+ p,chk_err=False): return 1 else: return 0 def checkDependencies(self): """ Generic dependency check. Return 1 unless all dependencies are satisfied. 
""" if DistInfo()[1] < '7.0' and self.checkPackageInstalled('NetworkManager'): Error(GuestAgentLongName + " is not compatible with network-manager.") return 1 try: m= __import__('pyasn1') except ImportError: Error(GuestAgentLongName + " requires python-pyasn1 for your Linux distribution.") return 1 for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 ############################################################ # centosDistro ############################################################ class centosDistro(redhatDistro): """ CentOS Distro concrete class Put CentOS specific behavior here... """ def __init__(self): super(centosDistro,self).__init__() ############################################################ # eulerosDistro ############################################################ class eulerosDistro(redhatDistro): """ EulerOS Distro concrete class Put EulerOS specific behavior here... """ def __init__(self): super(eulerosDistro,self).__init__() ############################################################ # oracleDistro ############################################################ class oracleDistro(redhatDistro): """ Oracle Distro concrete class Put Oracle specific behavior here... """ def __init__(self): super(oracleDistro, self).__init__() ############################################################ # asianuxDistro ############################################################ class asianuxDistro(redhatDistro): """ Asianux Distro concrete class Put Asianux specific behavior here... """ def __init__(self): super(asianuxDistro,self).__init__() ############################################################ # CoreOSDistro ############################################################ class CoreOSDistro(AbstractDistro): """ CoreOS Distro concrete class Put CoreOS specific behavior here... 
""" CORE_UID = 500 def __init__(self): super(CoreOSDistro,self).__init__() self.requiredDeps += [ "/usr/bin/systemctl" ] self.agent_service_name = 'waagent' self.init_script_file='/etc/systemd/system/waagent.service' self.fileBlackList.append("/etc/machine-id") self.dhcp_client_name='systemd-networkd' self.getpidcmd='pidof ' self.shadow_file_mode=0640 self.waagent_path='/usr/share/oem/bin' self.python_path='/usr/share/oem/python/bin' self.dhcp_enabled=True if 'PATH' in os.environ: os.environ['PATH'] = "{0}:{1}".format(os.environ['PATH'], self.python_path) else: os.environ['PATH'] = self.python_path if 'PYTHONPATH' in os.environ: os.environ['PYTHONPATH'] = "{0}:{1}".format(os.environ['PYTHONPATH'], self.waagent_path) else: os.environ['PYTHONPATH'] = self.waagent_path def checkPackageInstalled(self,p): """ There is no package manager in CoreOS. Return 1 since it must be preinstalled. """ return 1 def checkDependencies(self): for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def checkPackageUpdateable(self,p): """ There is no package manager in CoreOS. Return 0 since it can't be updated via package. """ return 0 def startAgentService(self): return Run('systemctl start ' + self.agent_service_name) def stopAgentService(self): return Run('systemctl stop ' + self.agent_service_name) def restartSshService(self): """ SSH is socket activated on CoreOS. No need to restart it. """ return 0 def sshDeployPublicKey(self,fprint,path): """ We support PKCS8. """ if Run("ssh-keygen -i -m PKCS8 -f " + fprint + " >> " + path): return 1 else : return 0 def RestartInterface(self, iface): Run("systemctl restart systemd-networkd") def CreateAccount(self, user, password, expiration, thumbprint): """ Create a user account, with 'user', 'password', 'expiration', ssh keys and sudo permissions. Returns None if successful, error string on failure. 
""" userentry = None try: userentry = pwd.getpwnam(user) except: pass uidmin = None try: uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin and userentry[2] != self.CORE_UID: Error("CreateAccount: " + user + " is a system user. Will not set password.") return "Failed to set password for system user: " + user + " (0x06)." if userentry == None: command = "useradd --create-home --password '*' " + user if expiration != None: command += " --expiredate " + expiration.split('.')[0] if Run(command): Error("Failed to create user account: " + user) return "Failed to create user account: " + user + " (0x07)." else: Log("CreateAccount: " + user + " already exists. Will update password.") if password != None: self.changePass(user, password) try: if password == None: SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) NOPASSWD: ALL\n") else: SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) ALL\n") os.chmod("/etc/sudoers.d/waagent", 0440) except: Error("CreateAccount: Failed to configure sudo access for user.") return "Failed to configure sudo privileges (0x08)." 
home = MyDistro.GetHome() if thumbprint != None: dir = home + "/" + user + "/.ssh" CreateDir(dir, user, 0700) pub = dir + "/id_rsa.pub" prv = dir + "/id_rsa" Run("ssh-keygen -y -f " + thumbprint + ".prv > " + pub) SetFileContents(prv, GetFileContents(thumbprint + ".prv")) for f in [pub, prv]: os.chmod(f, 0600) ChangeOwner(f, user) SetFileContents(dir + "/authorized_keys", GetFileContents(pub)) ChangeOwner(dir + "/authorized_keys", user) Log("Created user account: " + user) return None def startDHCP(self): Run("systemctl start " + self.dhcp_client_name, chk_err=False) def stopDHCP(self): Run("systemctl stop " + self.dhcp_client_name, chk_err=False) def translateCustomData(self, data): return base64.b64decode(data) def getConfigurationPath(self): return "/usr/share/oem/waagent.conf" ############################################################ # debianDistro ############################################################ debian_init_file = """\ #!/bin/sh ### BEGIN INIT INFO # Provides: AzureLinuxAgent # Required-Start: $network $syslog # Required-Stop: $network $syslog # Should-Start: $network $syslog # Should-Stop: $network $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: AzureLinuxAgent # Description: AzureLinuxAgent ### END INIT INFO . /lib/lsb/init-functions OPTIONS="-daemon" WAZD_BIN=/usr/sbin/waagent WAZD_PID=/var/run/waagent.pid case "$1" in start) log_begin_msg "Starting AzureLinuxAgent..." pid=$( pidofproc $WAZD_BIN ) if [ -n "$pid" ] ; then log_begin_msg "Already running." log_end_msg 0 exit 0 fi start-stop-daemon --start --quiet --oknodo --background --exec $WAZD_BIN -- $OPTIONS log_end_msg $? ;; stop) log_begin_msg "Stopping AzureLinuxAgent..." start-stop-daemon --stop --quiet --oknodo --pidfile $WAZD_PID ret=$? rm -f $WAZD_PID log_end_msg $ret ;; force-reload) $0 restart ;; restart) $0 stop $0 start ;; status) status_of_proc $WAZD_BIN && exit 0 || exit $? 
;; *) log_success_msg "Usage: /etc/init.d/waagent {start|stop|force-reload|restart|status}" exit 1 ;; esac exit 0 """ class debianDistro(AbstractDistro): """ debian Distro concrete class Put debian specific behavior here... """ def __init__(self): super(debianDistro,self).__init__() self.requiredDeps += [ "/usr/sbin/update-rc.d" ] self.init_file=debian_init_file self.agent_package_name='walinuxagent' self.dhcp_client_name='dhclient' self.getpidcmd='pidof ' self.shadow_file_mode=0640 def checkPackageInstalled(self,p): """ Check that the package is installed. Return 1 if installed, 0 if not installed. This method of using dpkg-query allows wildcards to be present in the package name. """ if not Run("dpkg-query -W -f='${Status}\n' '" + p + "' | grep ' installed' 2>&1",chk_err=False): return 1 else: return 0 def checkDependencies(self): """ Debian dependency check. python-pyasn1 is NOT needed. Return 1 unless all dependencies are satisfied. NOTE: using network*manager will catch either package name in Ubuntu or debian. """ if self.checkPackageInstalled('network*manager'): Error(GuestAgentLongName + " is not compatible with network-manager.") return 1 for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def checkPackageUpdateable(self,p): if Run("apt-get update ; apt-get upgrade -us | grep " + p,chk_err=False): return 1 else: return 0 def installAgentServiceScriptFiles(self): """ If we are packaged - the service name is walinuxagent, do nothing. 
""" if self.agent_service_name == 'walinuxagent': return 0 try: SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0744) except OSError, e: ErrorWithPrefix('installAgentServiceScriptFiles','Exception: '+str(e)+' occured creating ' + self.init_script_file) return 1 return 0 def registerAgentService(self): if self.installAgentServiceScriptFiles() == 0: return Run('update-rc.d waagent defaults') else : return 1 def uninstallAgentService(self): return Run('update-rc.d -f ' + self.agent_service_name + ' remove') def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def sshDeployPublicKey(self,fprint,path): """ We support PKCS8. """ if Run("ssh-keygen -i -m PKCS8 -f " + fprint + " >> " + path): return 1 else : return 0 ############################################################ # KaliDistro - WIP # Functioning on Kali 1.1.0a so far ############################################################ class KaliDistro(debianDistro): """ Kali Distro concrete class Put Kali specific behavior here... """ def __init__(self): super(KaliDistro,self).__init__() ############################################################ # UbuntuDistro ############################################################ ubuntu_upstart_file = """\ #walinuxagent - start Azure agent description "walinuxagent" author "Ben Howard " start on (filesystem and started rsyslog) pre-start script WALINUXAGENT_ENABLED=1 [ -r /etc/default/walinuxagent ] && . /etc/default/walinuxagent if [ "$WALINUXAGENT_ENABLED" != "1" ]; then exit 1 fi if [ ! -x /usr/sbin/waagent ]; then exit 1 fi #Load the udf module modprobe -b udf end script exec /usr/sbin/waagent -daemon """ class UbuntuDistro(debianDistro): """ Ubuntu Distro concrete class Put Ubuntu specific behavior here... 
""" def __init__(self): super(UbuntuDistro,self).__init__() self.init_script_file='/etc/init/waagent.conf' self.init_file=ubuntu_upstart_file self.fileBlackList = [ "/root/.bash_history", "/var/log/waagent.log"] self.dhcp_client_name=None self.getpidcmd='pidof ' def registerAgentService(self): return self.installAgentServiceScriptFiles() def uninstallAgentService(self): """ If we are packaged - the service name is walinuxagent, do nothing. """ if self.agent_service_name == 'walinuxagent': return 0 os.remove('/etc/init/' + self.agent_service_name + '.conf') def unregisterAgentService(self): """ If we are packaged - the service name is walinuxagent, do nothing. """ if self.agent_service_name == 'walinuxagent': return self.stopAgentService() return self.uninstallAgentService() def deprovisionWarnUser(self): """ Ubuntu specific warning string from Deprovision. """ print("WARNING! Nameserver configuration in /etc/resolvconf/resolv.conf.d/{tail,original} will be deleted.") def deprovisionDeleteFiles(self): """ Ubuntu uses resolv.conf by default, so removing /etc/resolv.conf will break resolvconf. Therefore, we check to see if resolvconf is in use, and if so, we remove the resolvconf artifacts. """ if os.path.realpath('/etc/resolv.conf') != '/run/resolvconf/resolv.conf': Log("resolvconf is not configured. Removing /etc/resolv.conf") self.fileBlackList.append('/etc/resolv.conf') else: Log("resolvconf is enabled; leaving /etc/resolv.conf intact") resolvConfD = '/etc/resolvconf/resolv.conf.d/' self.fileBlackList.extend([resolvConfD + 'tail', resolvConfD + 'original']) for f in os.listdir(LibDir)+self.fileBlackList: try: os.remove(f) except: pass return 0 def getDhcpClientName(self): if self.dhcp_client_name != None : return self.dhcp_client_name if DistInfo()[1] == '12.04' : self.dhcp_client_name='dhclient3' else : self.dhcp_client_name='dhclient' return self.dhcp_client_name def waitForSshHostKey(self, path): """ Wait until the ssh host key is generated by cloud init. 
""" for retry in range(0, 10): if(os.path.isfile(path)): return True time.sleep(1) Error("Can't find host key: {0}".format(path)) return False ############################################################ # LinuxMintDistro ############################################################ class LinuxMintDistro(UbuntuDistro): """ LinuxMint Distro concrete class Put LinuxMint specific behavior here... """ def __init__(self): super(LinuxMintDistro,self).__init__() ############################################################ # fedoraDistro ############################################################ fedora_systemd_service = """\ [Unit] Description=Azure Linux Agent After=network.target After=sshd.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/sbin/waagent -daemon [Install] WantedBy=multi-user.target """ class fedoraDistro(redhatDistro): """ FedoraDistro concrete class Put Fedora specific behavior here... """ def __init__(self): super(fedoraDistro,self).__init__() self.service_cmd = '/usr/bin/systemctl' self.hostname_file_path = '/etc/hostname' self.init_script_file = '/usr/lib/systemd/system/' + self.agent_service_name + '.service' self.init_file = fedora_systemd_service self.grubKernelBootOptionsFile = '/etc/default/grub' self.grubKernelBootOptionsLine = 'GRUB_CMDLINE_LINUX=' def publishHostname(self, name): SetFileContents(self.hostname_file_path, name + '\n') ethernetInterface = MyDistro.GetInterfaceName() filepath = "/etc/sysconfig/network-scripts/ifcfg-" + ethernetInterface if os.path.isfile(filepath): ReplaceFileContentsAtomic(filepath, "DHCP_HOSTNAME=" + name + "\n" + "\n".join(filter(lambda a: not a.startswith("DHCP_HOSTNAME"), GetFileContents(filepath).split('\n')))) return 0 def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0644) return Run(self.service_cmd + ' daemon-reload') def registerAgentService(self): 
        self.installAgentServiceScriptFiles()
        return Run(self.service_cmd + ' enable ' + self.agent_service_name)

    def uninstallAgentService(self):
        """
        Call service subsystem to remove waagent script.
        """
        return Run(self.service_cmd + ' disable ' + self.agent_service_name)

    def unregisterAgentService(self):
        """
        Calls self.stopAgentService(), then self.uninstallAgentService().
        """
        self.stopAgentService()
        self.uninstallAgentService()

    def startAgentService(self):
        """
        Service call to start the Agent service.
        """
        return Run(self.service_cmd + ' start ' + self.agent_service_name)

    def stopAgentService(self):
        """
        Service call to stop the Agent service.
        """
        return Run(self.service_cmd + ' stop ' + self.agent_service_name, False)

    def restartSshService(self):
        """
        Service call to re(start) the SSH service.
        """
        sshRestartCmd = self.service_cmd + " " + self.ssh_service_restart_option + " " + self.ssh_service_name
        retcode = Run(sshRestartCmd)
        if retcode > 0:
            Error("Failed to restart SSH service with return code: " + str(retcode))
        return retcode

    def checkPackageInstalled(self, p):
        """
        Query the package database for presence of an installed package.
        """
        import rpm
        ts = rpm.TransactionSet()
        rpms = ts.dbMatch(rpm.RPMTAG_PROVIDES, p)
        return bool(len(rpms) > 0)

    def deleteRootPassword(self):
        return Run("/sbin/usermod root -p '!!'")

    def packagedInstall(self, buildroot):
        """
        Called from setup.py for use by RPM.
        Copies generated files (waagent.conf, etc.) under the buildroot.
""" if not os.path.exists(buildroot+'/etc'): os.mkdir(buildroot+'/etc') SetFileContents(buildroot+'/etc/waagent.conf', MyDistro.waagent_conf_file) if not os.path.exists(buildroot+'/etc/logrotate.d'): os.mkdir(buildroot+'/etc/logrotate.d') SetFileContents(buildroot+'/etc/logrotate.d/WALinuxAgent', WaagentLogrotate) self.init_script_file=buildroot+self.init_script_file # this allows us to call installAgentServiceScriptFiles() if not os.path.exists(os.path.dirname(self.init_script_file)): os.mkdir(os.path.dirname(self.init_script_file)) self.installAgentServiceScriptFiles() def CreateAccount(self, user, password, expiration, thumbprint): super(fedoraDistro, self).CreateAccount(user, password, expiration, thumbprint) Run('/sbin/usermod ' + user + ' -G wheel') def DeleteAccount(self, user): Run('/sbin/usermod ' + user + ' -G ""') super(fedoraDistro, self).DeleteAccount(user) ############################################################ # FreeBSD ############################################################ FreeBSDWaagentConf = """\ # # Azure Linux Agent Configuration # Role.StateConsumer=None # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role configuration. Role.TopologyConsumer=None # Specified program is invoked with XML file argument specifying role topology. Provisioning.Enabled=y # Provisioning.DeleteRootPassword=y # Password authentication for root account will be unavailable. Provisioning.RegenerateSshHostKeyPair=y # Generate fresh host key pair. Provisioning.SshHostKeyPairType=rsa # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.MonitorHostName=y # Monitor host name changes and publish changes via DHCP requests. ResourceDisk.Format=y # Format if unformatted. If 'n', resource disk will not be mounted. 
ResourceDisk.Filesystem=ufs2 # ResourceDisk.MountPoint=/mnt/resource # ResourceDisk.EnableSwap=n # Create and use swapfile on resource disk. ResourceDisk.SwapSizeMB=0 # Size of the swapfile. LBProbeResponder=y # Respond to load balancer probes if requested by Azure. Logs.Verbose=n # Enable verbose logs OS.RootDeviceScsiTimeout=300 # Root device timeout in seconds. OS.OpensslPath=None # If "None", the system default version is used. """ bsd_init_file="""\ #! /bin/sh # PROVIDE: waagent # REQUIRE: DAEMON cleanvar sshd # BEFORE: LOGIN # KEYWORD: nojail . /etc/rc.subr export PATH=$PATH:/usr/local/bin name="waagent" rcvar="waagent_enable" command="/usr/sbin/${name}" command_interpreter="/usr/local/bin/python" waagent_flags=" daemon &" pidfile="/var/run/waagent.pid" load_rc_config $name run_rc_command "$1" """ bsd_activate_resource_disk_txt="""\ #!/usr/bin/env python import os import sys if sys.version_info[0] == 2: import imp else: import importlib # waagent has no '.py' therefore create waagent module import manually. 
__name__='setupmain' #prevent waagent.__main__ from executing waagent=imp.load_source('waagent','/tmp/waagent') waagent.LoggerInit('/var/log/waagent.log','/dev/console') from waagent import RunGetOutput,Run Config=waagent.ConfigurationProvider(None) format = Config.get("ResourceDisk.Format") if format == None or format.lower().startswith("n"): sys.exit(0) device_base = 'da1' device = "/dev/" + device_base for entry in RunGetOutput("mount")[1].split(): if entry.startswith(device + "s1"): waagent.Log("ActivateResourceDisk: " + device + "s1 is already mounted.") sys.exit(0) mountpoint = Config.get("ResourceDisk.MountPoint") if mountpoint == None: mountpoint = "/mnt/resource" waagent.CreateDir(mountpoint, "root", 0755) fs = Config.get("ResourceDisk.Filesystem") if waagent.FreeBSDDistro().mediaHasFilesystem(device) == False : Run("newfs " + device + "s1") if Run("mount " + device + "s1 " + mountpoint): waagent.Error("ActivateResourceDisk: Failed to mount resource disk (" + device + "s1).") sys.exit(0) waagent.Log("Resource disk (" + device + "s1) is mounted at " + mountpoint + " with fstype " + fs) waagent.SetFileContents(os.path.join(mountpoint,waagent.README_FILENAME), waagent.README_FILECONTENT) swap = Config.get("ResourceDisk.EnableSwap") if swap == None or swap.lower().startswith("n"): sys.exit(0) sizeKB = int(Config.get("ResourceDisk.SwapSizeMB")) * 1024 if os.path.isfile(mountpoint + "/swapfile") and os.path.getsize(mountpoint + "/swapfile") != (sizeKB * 1024): os.remove(mountpoint + "/swapfile") if not os.path.isfile(mountpoint + "/swapfile"): Run("umask 0077 && dd if=/dev/zero of=" + mountpoint + "/swapfile bs=1024 count=" + str(sizeKB)) if Run("mdconfig -a -t vnode -f " + mountpoint + "/swapfile -u 0"): waagent.Error("ActivateResourceDisk: Configuring swap - Failed to create md0") if not Run("swapon /dev/md0"): waagent.Log("Enabled " + str(sizeKB) + " KB of swap at " + mountpoint + "/swapfile") else: waagent.Error("ActivateResourceDisk: Failed to activate swap 
at " + mountpoint + "/swapfile") """ class FreeBSDDistro(AbstractDistro): """ """ def __init__(self): """ Generic Attributes go here. These are based on 'majority rules'. This __init__() may be called or overriden by the child. """ super(FreeBSDDistro,self).__init__() self.agent_service_name = os.path.basename(sys.argv[0]) self.selinux=False self.ssh_service_name='sshd' self.ssh_config_file='/etc/ssh/sshd_config' self.hostname_file_path='/etc/hostname' self.dhcp_client_name='dhclient' self.requiredDeps = [ 'route', 'shutdown', 'ssh-keygen', 'pw' , 'openssl', 'fdisk', 'sed', 'grep' , 'sudo'] self.init_script_file='/etc/rc.d/waagent' self.init_file=bsd_init_file self.agent_package_name='WALinuxAgent' self.fileBlackList = [ "/root/.bash_history", "/var/log/waagent.log",'/etc/resolv.conf' ] self.agent_files_to_uninstall = ["/etc/waagent.conf"] self.grubKernelBootOptionsFile = '/boot/loader.conf' self.grubKernelBootOptionsLine = '' self.getpidcmd = 'pgrep -n' self.mount_dvd_cmd = 'dd bs=2048 count=33 skip=295 if=' # custom data max len is 64k self.sudoers_dir_base = '/usr/local/etc' self.waagent_conf_file = FreeBSDWaagentConf def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0777) AppendFileContents("/etc/rc.conf","waagent_enable='YES'\n") return 0 def registerAgentService(self): self.installAgentServiceScriptFiles() return Run("services_mkdb " + self.init_script_file) def sshDeployPublicKey(self,fprint,path): """ We support PKCS8. """ if Run("ssh-keygen -i -m PKCS8 -f " + fprint + " >> " + path): return 1 else : return 0 def deleteRootPassword(self): """ BSD root password removal. 
""" filepath="/etc/master.passwd" ReplaceStringInFile(filepath,r'root:.*?:','root::') #ReplaceFileContentsAtomic(filepath,"root:*LOCK*:14600::::::\n" # + "\n".join(filter(lambda a: not a.startswith("root:"),GetFileContents(filepath).split('\n')))) os.chmod(filepath,self.shadow_file_mode) if self.isSelinuxSystem(): self.setSelinuxContext(filepath,'system_u:object_r:shadow_t:s0') RunGetOutput("pwd_mkdb -u root /etc/master.passwd") Log("Root password deleted.") return 0 def changePass(self,user,password): return RunSendStdin("pw usermod " + user + " -h 0 ",password, log_cmd=False) def load_ata_piix(self): return 0 def unload_ata_piix(self): return 0 def checkDependencies(self): """ FreeBSD dependency check. Return 1 unless all dependencies are satisfied. """ for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def packagedInstall(self,buildroot): pass def GetInterfaceName(self): """ Return the ip of the active ethernet interface. """ iface,inet,mac=self.GetFreeBSDEthernetInfo() return iface def RestartInterface(self, iface): Run("service netif restart") def GetIpv4Address(self): """ Return the ip of the active ethernet interface. """ iface,inet,mac=self.GetFreeBSDEthernetInfo() return inet def GetMacAddress(self): """ Return the ip of the active ethernet interface. """ iface,inet,mac=self.GetFreeBSDEthernetInfo() l=mac.split(':') r=[] for i in l: r.append(string.atoi(i,16)) return r def GetFreeBSDEthernetInfo(self): """ There is no SIOCGIFCONF on freeBSD - just parse ifconfig. Returns strings: iface, inet4_addr, and mac or 'None,None,None' if unable to parse. We will sleep and retry as the network must be up. 
""" code,output=RunGetOutput("ifconfig",chk_err=False) Log(output) retries=10 cmd='ifconfig | grep -A2 -B2 ether | grep -B3 inet | grep -A4 UP ' code=1 while code > 0 : if code > 0 and retries == 0: Error("GetFreeBSDEthernetInfo - Failed to detect ethernet interface") return None, None, None code,output=RunGetOutput(cmd,chk_err=False) retries-=1 if code > 0 and retries > 0 : Log("GetFreeBSDEthernetInfo - Error: retry ethernet detection " + str(retries)) if retries == 9 : c,o=RunGetOutput("ifconfig | grep -A1 -B2 ether",chk_err=False) if c == 0: t=o.replace('\n',' ') t=t.split() i=t[0][:-1] Log(RunGetOutput('id')[1]) Run('dhclient '+i) time.sleep(10) j=output.replace('\n',' ') j=j.split() iface=j[0][:-1] for i in range(len(j)): if j[i] == 'inet' : inet=j[i+1] elif j[i] == 'ether' : mac=j[i+1] return iface, inet, mac def CreateAccount(self,user, password, expiration, thumbprint): """ Create a user account, with 'user', 'password', 'expiration', ssh keys and sudo permissions. Returns None if successful, error string on failure. """ userentry = None try: userentry = pwd.getpwnam(user) except: pass uidmin = None try: if os.path.isfile("/etc/login.defs"): uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin: Error("CreateAccount: " + user + " is a system user. Will not set password.") return "Failed to set password for system user: " + user + " (0x06)." if userentry == None: command = "pw useradd " + user + " -m" if expiration != None: command += " -e " + expiration.split('.')[0] if Run(command): Error("Failed to create user account: " + user) return "Failed to create user account: " + user + " (0x07)." else: Log("CreateAccount: " + user + " already exists. 
Will update password.")
        if password != None:
            self.changePass(user, password)
        try:
            # for older distros create sudoers.d
            if not os.path.isdir(MyDistro.sudoers_dir_base + '/sudoers.d/'):
                # create the /etc/sudoers.d/ directory
                os.mkdir(MyDistro.sudoers_dir_base + '/sudoers.d')
                # add the include of sudoers.d to the /etc/sudoers
                SetFileContents(MyDistro.sudoers_dir_base + '/sudoers',
                                GetFileContents(MyDistro.sudoers_dir_base + '/sudoers')
                                + '\n#includedir ' + MyDistro.sudoers_dir_base + '/sudoers.d\n')
            if password == None:
                SetFileContents(MyDistro.sudoers_dir_base + "/sudoers.d/waagent",
                                user + " ALL = (ALL) NOPASSWD: ALL\n")
            else:
                SetFileContents(MyDistro.sudoers_dir_base + "/sudoers.d/waagent",
                                user + " ALL = (ALL) ALL\n")
            os.chmod(MyDistro.sudoers_dir_base + "/sudoers.d/waagent", 0440)
        except:
            Error("CreateAccount: Failed to configure sudo access for user.")
            return "Failed to configure sudo privileges (0x08)."
        home = MyDistro.GetHome()
        if thumbprint != None:
            dir = home + "/" + user + "/.ssh"
            CreateDir(dir, user, 0700)
            pub = dir + "/id_rsa.pub"
            prv = dir + "/id_rsa"
            Run("ssh-keygen -y -f " + thumbprint + ".prv > " + pub)
            SetFileContents(prv, GetFileContents(thumbprint + ".prv"))
            for f in [pub, prv]:
                os.chmod(f, 0600)
                ChangeOwner(f, user)
            SetFileContents(dir + "/authorized_keys", GetFileContents(pub))
            ChangeOwner(dir + "/authorized_keys", user)
        Log("Created user account: " + user)
        return None

    def DeleteAccount(self, user):
        """
        Delete the 'user'.
        Clear utmp first, to avoid error.
        Removes the /etc/sudoers.d/waagent file.
        """
        userentry = None
        try:
            userentry = pwd.getpwnam(user)
        except:
            pass
        if userentry == None:
            Error("DeleteAccount: " + user + " not found.")
            return
        uidmin = None
        try:
            if os.path.isfile("/etc/login.defs"):
                uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1])
        except:
            pass
        if uidmin == None:
            uidmin = 100
        if userentry[2] < uidmin:
            Error("DeleteAccount: " + user + " is a system user. Will not delete account.")
            return
        Run("> /var/run/utmp")  # Delete utmp to prevent error if we are the 'user' deleted
        pid = subprocess.Popen(['rmuser', '-y', user],
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               stdin=subprocess.PIPE).pid
        try:
            os.remove(MyDistro.sudoers_dir_base + "/sudoers.d/waagent")
        except:
            pass
        return

    def ActivateResourceDiskNoThread(self):
        """
        Format, mount, and if specified in the configuration set resource
        disk as swap.
        """
        global DiskActivated
        Run('cp /usr/sbin/waagent /tmp/')
        SetFileContents('/tmp/bsd_activate_resource_disk.py',
                        bsd_activate_resource_disk_txt)
        Run('chmod +x /tmp/bsd_activate_resource_disk.py')
        pid = subprocess.Popen(["/tmp/bsd_activate_resource_disk.py", ""]).pid
        Log("Spawning bsd_activate_resource_disk.py")
        DiskActivated = True
        return

    def Install(self):
        """
        Install the agent service.
        Check dependencies.
        Create /etc/waagent.conf and move old version to /etc/waagent.conf.old
        Copy RulesFiles to /var/lib/waagent
        Create /etc/logrotate.d/waagent
        Set /etc/ssh/sshd_config ClientAliveInterval to 180
        Call ApplyVNUMAWorkaround()
        """
        if MyDistro.checkDependencies():
            return 1
        os.chmod(sys.argv[0], 0755)
        SwitchCwd()
        for a in RulesFiles:
            if os.path.isfile(a):
                if os.path.isfile(GetLastPathElement(a)):
                    os.remove(GetLastPathElement(a))
                shutil.move(a, ".")
                Warn("Moved " + a + " -> " + LibDir + "/" + GetLastPathElement(a))
        MyDistro.registerAgentService()
        if os.path.isfile("/etc/waagent.conf"):
            try:
                os.remove("/etc/waagent.conf.old")
            except:
                pass
            try:
                os.rename("/etc/waagent.conf", "/etc/waagent.conf.old")
                Warn("Existing /etc/waagent.conf has been renamed to /etc/waagent.conf.old")
            except:
                pass
        SetFileContents("/etc/waagent.conf", self.waagent_conf_file)
        if os.path.exists('/usr/local/etc/logrotate.d/'):
            SetFileContents("/usr/local/etc/logrotate.d/waagent", WaagentLogrotate)
        filepath = "/etc/ssh/sshd_config"
        ReplaceFileContentsAtomic(filepath,
                                  "\n".join(filter(lambda a: not a.startswith("ClientAliveInterval"),
                                                   GetFileContents(filepath).split('\n')))
                                  + "\nClientAliveInterval 180\n")
        Log("Configured SSH client probing to keep connections alive.")
        # ApplyVNUMAWorkaround()
        return 0

    def mediaHasFilesystem(self, dsk):
        if Run('LC_ALL=C fdisk -p ' + dsk + ' | grep "invalid fdisk partition table found" ', False):
            return False
        return True

    def mountDVD(self, dvd, location):
        # At this point we cannot read a joliet option udf DVD in freebsd10,
        # so we 'dd' it into our location.
        retcode, out = RunGetOutput(self.mount_dvd_cmd + dvd + ' of=' + location + '/ovf-env.xml')
        if retcode != 0:
            return retcode, out
        ovfxml = GetFileContents(location + "/ovf-env.xml", asbin=False)
        if ord(ovfxml[0]) > 128 and ord(ovfxml[1]) > 128 and ord(ovfxml[2]) > 128:
            # BOM is not stripped. First three bytes are > 128 and not unicode chars, so we ignore them.
            ovfxml = ovfxml[3:]
        ovfxml = ovfxml.strip(chr(0x00))
        ovfxml = "".join(filter(lambda x: ord(x) < 128, ovfxml))
        # Discard anything trailing the closing tag of the OVF document.
        # (The tag name here is restored from context; the extraction stripped the XML markup.)
        ovfxml = re.sub(r'</Environment>.*\Z', '', ovfxml, 0, re.DOTALL)
        ovfxml += '</Environment>'
        SetFileContents(location + "/ovf-env.xml", ovfxml)
        return retcode, out

    def GetHome(self):
        return '/home'

    def initScsiDiskTimeout(self):
        """
        Set the SCSI disk timeout by updating the kernel config
        """
        timeout = Config.get("OS.RootDeviceScsiTimeout")
        if timeout:
            Run("sysctl kern.cam.da.default_timeout=" + timeout)

    def setScsiDiskTimeout(self):
        return

    def setBlockDeviceTimeout(self, device, timeout):
        return

    def getProcessorCores(self):
        return int(RunGetOutput("sysctl hw.ncpu | awk '{print $2}'")[1])

    def getTotalMemory(self):
        return int(RunGetOutput("sysctl hw.realmem | awk '{print $2}'")[1]) / 1024

    def setDefaultGateway(self, gateway):
        Run("/sbin/route add default " + gateway, chk_err=False)

    def routeAdd(self, net, mask, gateway):
        Run("/sbin/route add -net " + net + " " + mask + " " + gateway, chk_err=False)

############################################################
# END DISTRO CLASS DEFS
############################################################

# This lets us index into a string or an array of integers
# transparently.

def Ord(a):
    """
    Allows indexing into a string or an array of integers transparently.
    Generic utility function.
    """
    if type(a) == type("a"):
        a = ord(a)
    return a

def IsLinux():
    """
    Returns True if platform is Linux.
    Generic utility function.
    """
    return (platform.uname()[0] == "Linux")

def GetLastPathElement(path):
    """
    Similar to basename.
    Generic utility function.
    """
    return path.rsplit('/', 1)[1]

def GetFileContents(filepath, asbin=False):
    """
    Read and return contents of 'filepath'.
    """
    mode = 'r'
    if asbin:
        mode += 'b'
    c = None
    try:
        with open(filepath, mode) as F:
            c = F.read()
    except IOError, e:
        ErrorWithPrefix('GetFileContents', 'Reading from file ' + filepath + ' Exception is ' + str(e))
        return None
    return c

def SetFileContents(filepath, contents):
    """
    Write 'contents' to 'filepath'.
    """
    if type(contents) == str:
        contents = contents.encode('latin-1', 'ignore')
    try:
        with open(filepath, "wb+") as F:
            F.write(contents)
    except IOError, e:
        ErrorWithPrefix('SetFileContents', 'Writing to file ' + filepath + ' Exception is ' + str(e))
        return None
    return 0

def AppendFileContents(filepath, contents):
    """
    Append 'contents' to 'filepath'.
    """
    if type(contents) == str:
        contents = contents.encode('latin-1')
    try:
        with open(filepath, "a+") as F:
            F.write(contents)
    except IOError, e:
        ErrorWithPrefix('AppendFileContents', 'Appending to file ' + filepath + ' Exception is ' + str(e))
        return None
    return 0

def ReplaceFileContentsAtomic(filepath, contents):
    """
    Write 'contents' to 'filepath' by creating a temp file, and replacing original.
    """
    handle, temp = tempfile.mkstemp(dir=os.path.dirname(filepath))
    if type(contents) == str:
        contents = contents.encode('latin-1')
    try:
        os.write(handle, contents)
    except IOError, e:
        ErrorWithPrefix('ReplaceFileContentsAtomic', 'Writing to file ' + filepath + ' Exception is ' + str(e))
        return None
    finally:
        os.close(handle)
    try:
        os.rename(temp, filepath)
        return None
    except IOError, e:
        ErrorWithPrefix('ReplaceFileContentsAtomic', 'Renaming ' + temp + ' to ' + filepath + ' Exception is ' + str(e))
    try:
        os.remove(filepath)
    except IOError, e:
        ErrorWithPrefix('ReplaceFileContentsAtomic', 'Removing ' + filepath + ' Exception is ' + str(e))
    try:
        os.rename(temp, filepath)
    except IOError, e:
        ErrorWithPrefix('ReplaceFileContentsAtomic', 'Renaming ' + temp + ' to ' + filepath + ' Exception is ' + str(e))
        return 1
    return 0

def GetLineStartingWith(prefix, filepath):
    """
    Return the first line from 'filepath' that starts with 'prefix'.
    """
    for line in GetFileContents(filepath).split('\n'):
        if line.startswith(prefix):
            return line
    return None

def Run(cmd, chk_err=True):
    """
    Calls RunGetOutput on 'cmd', returning only the return code.
    If chk_err=True then errors will be reported in the log.
    If chk_err=False then errors will be suppressed from the log.
    """
    retcode, out = RunGetOutput(cmd, chk_err)
    return retcode

def RunGetOutput(cmd, chk_err=True, log_cmd=True):
    """
    Wrapper for subprocess.check_output.
    Execute 'cmd'.  Returns return code and STDOUT, trapping expected exceptions.
    Reports exceptions to Error if chk_err parameter is True
    """
    if log_cmd:
        LogIfVerbose(cmd)
    try:
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)
    except subprocess.CalledProcessError, e:
        if chk_err and log_cmd:
            Error('CalledProcessError. Error Code is ' + str(e.returncode))
            Error('CalledProcessError. Command string was ' + e.cmd)
            Error('CalledProcessError. Command result was ' + (e.output[:-1]).decode('latin-1'))
        return e.returncode, e.output.decode('latin-1')
    return 0, output.decode('latin-1')

def RunSendStdin(cmd, input, chk_err=True, log_cmd=True):
    """
    Wrapper for subprocess.Popen.
    Execute 'cmd', sending 'input' to STDIN of 'cmd'.
    Returns return code and STDOUT, trapping expected exceptions.
    Reports exceptions to Error if chk_err parameter is True
    """
    if log_cmd:
        LogIfVerbose(cmd + input)
    try:
        me = subprocess.Popen([cmd], shell=True, stdin=subprocess.PIPE,
                              stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
        output = me.communicate(input)
    except OSError, e:
        if chk_err and log_cmd:
            Error('CalledProcessError. Error Code is ' + str(me.returncode))
            Error('CalledProcessError. Command string was ' + cmd)
            Error('CalledProcessError. Command result was ' + output[0].decode('latin-1'))
            return 1, output[0].decode('latin-1')
    if me.returncode != 0 and chk_err is True and log_cmd:
        Error('CalledProcessError. Error Code is ' + str(me.returncode))
        Error('CalledProcessError. Command string was ' + cmd)
        Error('CalledProcessError. Command result was ' + output[0].decode('latin-1'))
    return me.returncode, output[0].decode('latin-1')

def GetNodeTextData(a):
    """
    Filter non-text nodes from DOM tree
    """
    for b in a.childNodes:
        if b.nodeType == b.TEXT_NODE:
            return b.data

def GetHome():
    """
    Attempt to guess the $HOME location.
    Return the path string.
    """
    home = None
    try:
        home = GetLineStartingWith("HOME", "/etc/default/useradd").split('=')[1].strip()
    except:
        pass
    if (home == None) or (home.startswith("/") == False):
        home = "/home"
    return home

def ChangeOwner(filepath, user):
    """
    Lookup user.  Attempt chown 'filepath' to 'user'.
    """
    p = None
    try:
        p = pwd.getpwnam(user)
    except:
        pass
    if p != None:
        if not os.path.exists(filepath):
            Error("Path does not exist: {0}".format(filepath))
        else:
            os.chown(filepath, p[2], p[3])

def CreateDir(dirpath, user, mode):
    """
    Attempt os.makedirs, catch all exceptions.
    Call ChangeOwner afterwards.
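The Run/RunGetOutput helpers above funnel every shell command through `subprocess.check_output` with stderr folded into stdout, returning the exit code and decoded output even on failure. A minimal, self-contained Python 3 sketch of the same wrapper (the function name and print-based error reporting are mine, not the agent's):

```python
import subprocess

def run_get_output(cmd, chk_err=True):
    """Run 'cmd' through the shell and return (exit_code, decoded_stdout).

    Mirrors the agent's RunGetOutput: stderr is folded into stdout, and a
    failing command yields its own exit code plus whatever it printed.
    """
    try:
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)
    except subprocess.CalledProcessError as e:
        if chk_err:
            print('Command "%s" failed with code %d' % (e.cmd, e.returncode))
        return e.returncode, e.output.decode('latin-1')
    return 0, output.decode('latin-1')
```

The `latin-1` decode matches the agent's habit of never raising on arbitrary bytes in command output.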
    """
    try:
        os.makedirs(dirpath, mode)
    except:
        pass
    ChangeOwner(dirpath, user)

def CreateAccount(user, password, expiration, thumbprint):
    """
    Create a user account, with 'user', 'password', 'expiration',
    ssh keys and sudo permissions.
    Returns None if successful, error string on failure.
    """
    userentry = None
    try:
        userentry = pwd.getpwnam(user)
    except:
        pass
    uidmin = None
    try:
        uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1])
    except:
        pass
    if uidmin == None:
        uidmin = 100
    if userentry != None and userentry[2] < uidmin:
        Error("CreateAccount: " + user + " is a system user. Will not set password.")
        return "Failed to set password for system user: " + user + " (0x06)."
    if userentry == None:
        command = "useradd -m " + user
        if expiration != None:
            command += " -e " + expiration.split('.')[0]
        if Run(command):
            Error("Failed to create user account: " + user)
            return "Failed to create user account: " + user + " (0x07)."
    else:
        Log("CreateAccount: " + user + " already exists. Will update password.")
    if password != None:
        MyDistro.changePass(user, password)
    try:
        # for older distros create sudoers.d
        if not os.path.isdir('/etc/sudoers.d/'):
            # create the /etc/sudoers.d/ directory
            os.mkdir('/etc/sudoers.d/')
            # add the include of sudoers.d to the /etc/sudoers
            SetFileContents('/etc/sudoers',
                            GetFileContents('/etc/sudoers') + '\n#includedir /etc/sudoers.d\n')
        if password == None:
            SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) NOPASSWD: ALL\n")
        else:
            SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) ALL\n")
        os.chmod("/etc/sudoers.d/waagent", 0440)
    except:
        Error("CreateAccount: Failed to configure sudo access for user.")
        return "Failed to configure sudo privileges (0x08)."
    home = MyDistro.GetHome()
    if thumbprint != None:
        dir = home + "/" + user + "/.ssh"
        CreateDir(dir, user, 0700)
        pub = dir + "/id_rsa.pub"
        prv = dir + "/id_rsa"
        Run("ssh-keygen -y -f " + thumbprint + ".prv > " + pub)
        SetFileContents(prv, GetFileContents(thumbprint + ".prv"))
        for f in [pub, prv]:
            os.chmod(f, 0600)
            ChangeOwner(f, user)
        SetFileContents(dir + "/authorized_keys", GetFileContents(pub))
        ChangeOwner(dir + "/authorized_keys", user)
    Log("Created user account: " + user)
    return None

def DeleteAccount(user):
    """
    Delete the 'user'.
    Clear utmp first, to avoid error.
    Removes the /etc/sudoers.d/waagent file.
    """
    userentry = None
    try:
        userentry = pwd.getpwnam(user)
    except:
        pass
    if userentry == None:
        Error("DeleteAccount: " + user + " not found.")
        return
    uidmin = None
    try:
        uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1])
    except:
        pass
    if uidmin == None:
        uidmin = 100
    if userentry[2] < uidmin:
        Error("DeleteAccount: " + user + " is a system user. Will not delete account.")
        return
    Run("> /var/run/utmp")  # Delete utmp to prevent error if we are the 'user' deleted
    Run("userdel -f -r " + user)
    try:
        os.remove("/etc/sudoers.d/waagent")
    except:
        pass
    return

def IsInRangeInclusive(a, low, high):
    """
    Return True if 'low' <= 'a' <= 'high'.
    """
    return (a >= low and a <= high)

def IsPrintable(ch):
    """
    Return True if character is displayable.
    """
    return (IsInRangeInclusive(ch, Ord('A'), Ord('Z'))
            or IsInRangeInclusive(ch, Ord('a'), Ord('z'))
            or IsInRangeInclusive(ch, Ord('0'), Ord('9')))

def HexDump(buffer, size):
    """
    Return a hex-formatted dump of 'buffer' of length 'size'.
    """
    if size < 0:
        size = len(buffer)
    result = ""
    for i in range(0, size):
        if (i % 16) == 0:
            result += "%06X: " % i
        byte = buffer[i]
        if type(byte) == str:
            byte = ord(byte.decode('latin1'))
        result += "%02X " % byte
        if (i & 15) == 7:
            result += " "
        if ((i + 1) % 16) == 0 or (i + 1) == size:
            j = i
            while ((j + 1) % 16) != 0:
                result += "   "
                if (j & 7) == 7:
                    result += " "
                j += 1
            result += " "
            for j in range(i - (i % 16), i + 1):
                byte = buffer[j]
                if type(byte) == str:
                    byte = ord(byte.decode('latin1'))
                k = '.'
                if IsPrintable(byte):
                    k = chr(byte)
                result += k
            if (i + 1) != size:
                result += "\n"
    return result

def SimpleLog(file_path, message):
    if not file_path or len(message) < 1:
        return
    t = time.localtime()
    t = "%04u/%02u/%02u %02u:%02u:%02u " % (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
    lines = re.sub(re.compile(r'^(.)', re.MULTILINE), t + r'\1', message)
    with open(file_path, "a") as F:
        lines = filter(lambda x: x in string.printable, lines)
        F.write(lines.encode('ascii', 'ignore') + "\n")

class Logger(object):
    """
    The Agent's logging assumptions are:
    For Log, and LogWithPrefix all messages are logged to the
    self.file_path and to the self.con_path.  Setting either path
    parameter to None skips that log.  If Verbose is enabled, messages
    calling the LogIfVerbose method will be logged to file_path yet
    not to con_path.  Error and Warn messages are normal log messages
    with the 'ERROR:' or 'WARNING:' prefix added.
    """

    def __init__(self, filepath, conpath, verbose=False):
        """
        Construct an instance of Logger.
        """
        self.file_path = filepath
        self.con_path = conpath
        self.verbose = verbose

    def ThrottleLog(self, counter):
        """
        Log everything up to 10, every 10 up to 100, then every 100.
        """
        return (counter < 10) or ((counter < 100) and ((counter % 10) == 0)) or ((counter % 100) == 0)

    def LogToFile(self, message):
        """
        Write 'message' to logfile.
        """
        if self.file_path:
            try:
                with open(self.file_path, "a") as F:
                    message = filter(lambda x: x in string.printable, message)
                    F.write(message.encode('ascii', 'ignore') + "\n")
            except IOError, e:
                print e
                pass

    def LogToCon(self, message):
        """
        Write 'message' to /dev/console.
        This supports serial port logging if the /dev/console
        is redirected to ttys0 in kernel boot options.
        """
        if self.con_path:
            try:
                with open(self.con_path, "w") as C:
                    message = filter(lambda x: x in string.printable, message)
                    C.write(message.encode('ascii', 'ignore') + "\n")
            except IOError, e:
                pass

    def Log(self, message):
        """
        Standard Log function.
        Logs to self.file_path, and con_path
        """
        self.LogWithPrefix("", message)

    def LogWithPrefix(self, prefix, message):
        """
        Prefix each line of 'message' with current time + 'prefix'.
        """
        t = time.localtime()
        t = "%04u/%02u/%02u %02u:%02u:%02u " % (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
        t += prefix
        for line in message.split('\n'):
            line = t + line
            self.LogToFile(line)
            self.LogToCon(line)

    def NoLog(self, message):
        """
        Don't Log.
        """
        pass

    def LogIfVerbose(self, message):
        """
        Only log 'message' if global Verbose is True.
        """
        self.LogWithPrefixIfVerbose('', message)

    def LogWithPrefixIfVerbose(self, prefix, message):
        """
        Only log 'message' if global Verbose is True.
        Prefix each line of 'message' with current time + 'prefix'.
        """
        if self.verbose == True:
            t = time.localtime()
            t = "%04u/%02u/%02u %02u:%02u:%02u " % (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
            t += prefix
            for line in message.split('\n'):
                line = t + line
                self.LogToFile(line)
                self.LogToCon(line)

    def Warn(self, message):
        """
        Prepend the text "WARNING:" to the prefix for each line in 'message'.
        """
        self.LogWithPrefix("WARNING:", message)

    def Error(self, message):
        """
        Call ErrorWithPrefix(message).
        """
        ErrorWithPrefix("", message)

    def ErrorWithPrefix(self, prefix, message):
        """
        Prepend the text "ERROR:" to the prefix for each line in 'message'.
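Logger.ThrottleLog above decides whether a repeating event should still be logged: every occurrence up to 10, then every 10th up to 100, then every 100th. The predicate in isolation, as a Python 3 sketch (the standalone function name is mine):

```python
def throttle_log(counter):
    # True for counters below 10, then multiples of 10 below 100,
    # then multiples of 100 — so chatty events taper off in the log.
    return (counter < 10) or (counter < 100 and counter % 10 == 0) or (counter % 100 == 0)
```

This keeps high-frequency events (such as load-balancer probes) visible without flooding the log file.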
        Errors written to logfile, and /dev/console
        """
        self.LogWithPrefix("ERROR:", message)

def LoggerInit(log_file_path, log_con_path, verbose=False):
    """
    Create log object and export its methods to global scope.
    """
    global Log, LogWithPrefix, LogIfVerbose, LogWithPrefixIfVerbose, Error, ErrorWithPrefix, Warn, NoLog, ThrottleLog, myLogger
    l = Logger(log_file_path, log_con_path, verbose)
    Log, LogWithPrefix, LogIfVerbose, LogWithPrefixIfVerbose, Error, ErrorWithPrefix, Warn, NoLog, ThrottleLog, myLogger = l.Log, l.LogWithPrefix, l.LogIfVerbose, l.LogWithPrefixIfVerbose, l.Error, l.ErrorWithPrefix, l.Warn, l.NoLog, l.ThrottleLog, l

def Linux_ioctl_GetInterfaceMac(ifname):
    """
    Return the mac-address bound to the socket.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    info = fcntl.ioctl(s.fileno(), 0x8927,
                       struct.pack('256s', (ifname[:15] + ('\0' * 241)).encode('latin-1')))
    return ''.join(['%02X' % Ord(char) for char in info[18:24]])

def GetFirstActiveNetworkInterfaceNonLoopback():
    """
    Return the interface name, and ip addr of the
    first active non-loopback interface.
    """
    iface = ''
    expected = 16  # how many devices should I expect...
    is_64bits = sys.maxsize > 2 ** 32
    struct_size = 40 if is_64bits else 32  # for 64bit the size is 40 bytes, for 32bit it is 32 bytes.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    buff = array.array('B', b'\0' * (expected * struct_size))
    retsize = (struct.unpack('iL', fcntl.ioctl(s.fileno(), 0x8912,
                struct.pack('iL', expected * struct_size, buff.buffer_info()[0]))))[0]
    if retsize == (expected * struct_size):
        Warn('SIOCGIFCONF returned more than ' + str(expected) + ' up network interfaces.')
    s = buff.tostring()
    preferred_nic = Config.get("Network.Interface")
    for i in range(0, struct_size * expected, struct_size):
        iface = s[i:i + 16].split(b'\0', 1)[0]
        if iface == b'lo':
            continue
        elif preferred_nic is None:
            break
        elif iface == preferred_nic:
            break
    return iface.decode('latin-1'), socket.inet_ntoa(s[i + 20:i + 24])

def GetIpv4Address():
    """
    Return the ip of the first active non-loopback interface.
    """
    iface, addr = GetFirstActiveNetworkInterfaceNonLoopback()
    return addr

def HexStringToByteArray(a):
    """
    Return hex string packed into a binary struct.
    """
    b = b""
    for c in range(0, len(a) // 2):
        b += struct.pack("B", int(a[c * 2:c * 2 + 2], 16))
    return b

def GetMacAddress():
    """
    Convenience function; returns the mac addr bound to the
    first non-loopback interface.
    """
    ifname = ''
    while len(ifname) < 2:
        ifname = GetFirstActiveNetworkInterfaceNonLoopback()[0]
    a = Linux_ioctl_GetInterfaceMac(ifname)
    return HexStringToByteArray(a)

def DeviceForIdePort(n):
    """
    Return device name attached to ide port 'n'.
    """
    if n > 3:
        return None
    g0 = "00000000"
    if n > 1:
        g0 = "00000001"
        n = n - 2
    device = None
    path = "/sys/bus/vmbus/devices/"
    for vmbus in os.listdir(path):
        guid = GetFileContents(path + vmbus + "/device_id").lstrip('{').split('-')
        if guid[0] == g0 and guid[1] == "000" + str(n):
            for root, dirs, files in os.walk(path + vmbus):
                if root.endswith("/block"):
                    device = dirs[0]
                    break
                else:  # older distros
                    for d in dirs:
                        if ':' in d and "block" == d.split(':')[0]:
                            device = d.split(':')[1]
                            break
            break
    return device

class HttpResourceGoneError(Exception):
    pass

class Util(object):
    """
    Http communication class.
    Base of GoalState, and Agent classes.
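Util._ParseUrl below hand-splits a `"http[s]://hostname[:port][/path]"` string into host, port, scheme, and path. An equivalent Python 3 sketch using the standard library instead of manual slicing (the helper name is mine; unlike the agent's version, this one has no Endpoint fallback and does not raise on an empty host):

```python
from urllib.parse import urlsplit

def parse_url(url):
    """Split 'url' into (host, port, secure, path), like Util._ParseUrl."""
    parts = urlsplit(url)
    secure = parts.scheme == "https"
    # The agent always requests a non-empty path; keep the query attached.
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return parts.hostname, parts.port, secure, path
```

`urlsplit` also validates the port for free, raising ValueError on a non-numeric port just as `int()` does in the hand-rolled version.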
    """
    RetryWaitingInterval = 10

    def __init__(self):
        self.Endpoint = None

    def _ParseUrl(self, url):
        secure = False
        host = self.Endpoint
        path = url
        port = None
        # "http[s]://hostname[:port][/]"
        if url.startswith("http://"):
            url = url[7:]
            if "/" in url:
                host = url[0:url.index("/")]
                path = url[url.index("/"):]
            else:
                host = url
                path = "/"
        elif url.startswith("https://"):
            secure = True
            url = url[8:]
            if "/" in url:
                host = url[0:url.index("/")]
                path = url[url.index("/"):]
            else:
                host = url
                path = "/"
        if host is None:
            raise ValueError("Host is invalid:{0}".format(url))
        if ":" in host:
            pos = host.rfind(":")
            port = int(host[pos + 1:])
            host = host[0:pos]
        return host, port, secure, path

    def GetHttpProxy(self, secure):
        """
        Get http_proxy and https_proxy from environment variables.
        Username and password are not supported yet.
        """
        host = Config.get("HttpProxy.Host")
        port = Config.get("HttpProxy.Port")
        return (host, port)

    def _HttpRequest(self, method, host, path, port=None, data=None, secure=False,
                     headers=None, proxyHost=None, proxyPort=None):
        resp = None
        conn = None
        try:
            if secure:
                port = 443 if port is None else port
                if proxyHost is not None and proxyPort is not None:
                    conn = httplib.HTTPSConnection(proxyHost, proxyPort, timeout=10)
                    conn.set_tunnel(host, port)
                    # If proxy is used, full url is needed.
                    path = "https://{0}:{1}{2}".format(host, port, path)
                else:
                    conn = httplib.HTTPSConnection(host, port, timeout=10)
            else:
                port = 80 if port is None else port
                if proxyHost is not None and proxyPort is not None:
                    conn = httplib.HTTPConnection(proxyHost, proxyPort, timeout=10)
                    # If proxy is used, full url is needed.
                    path = "http://{0}:{1}{2}".format(host, port, path)
                else:
                    conn = httplib.HTTPConnection(host, port, timeout=10)
            if headers == None:
                conn.request(method, path, data)
            else:
                conn.request(method, path, data, headers)
            resp = conn.getresponse()
        except httplib.HTTPException, e:
            Error('HTTPException {0}, args:{1}'.format(e, repr(e.args)))
        except IOError, e:
            Error('Socket IOError {0}, args:{1}'.format(e, repr(e.args)))
        return resp

    def HttpRequest(self, method, url, data=None, headers=None, maxRetry=3, chkProxy=False):
        """
        Send an http request to the server.
        On error, sleep 10 seconds and retry up to maxRetry times.
        Return the response object or None.
        """
        LogIfVerbose("HTTP Req: {0} {1}".format(method, url))
        LogIfVerbose("HTTP Req: Data={0}".format(data))
        LogIfVerbose("HTTP Req: Header={0}".format(headers))
        try:
            host, port, secure, path = self._ParseUrl(url)
        except ValueError, e:
            Error("Failed to parse url:{0}".format(url))
            return None
        # Check proxy
        proxyHost, proxyPort = (None, None)
        if chkProxy:
            proxyHost, proxyPort = self.GetHttpProxy(secure)
        # If httplib module is not built with ssl support, fall back to http.
        if secure and not hasattr(httplib, "HTTPSConnection"):
            Warn("httplib is not built with ssl support")
            secure = False
            proxyHost, proxyPort = self.GetHttpProxy(secure)
        # If httplib module doesn't support https tunnelling, fall back to http.
        if secure and \
                proxyHost is not None and \
                proxyPort is not None and \
                not hasattr(httplib.HTTPSConnection, "set_tunnel"):
            Warn("httplib doesn't support https tunnelling(new in python 2.7)")
            secure = False
            proxyHost, proxyPort = self.GetHttpProxy(secure)
        resp = self._HttpRequest(method, host, path, port=port, data=data, secure=secure,
                                 headers=headers, proxyHost=proxyHost, proxyPort=proxyPort)
        for retry in range(0, maxRetry):
            if resp is not None and \
                    (resp.status == httplib.OK or
                     resp.status == httplib.CREATED or
                     resp.status == httplib.ACCEPTED):
                return resp
            if resp is not None and resp.status == httplib.GONE:
                raise HttpResourceGoneError("Http resource gone.")
            Error("Retry={0}".format(retry))
            Error("HTTP Req: {0} {1}".format(method, url))
            Error("HTTP Req: Data={0}".format(data))
            Error("HTTP Req: Header={0}".format(headers))
            if resp is None:
                Error("HTTP Err: response is empty.")
            else:
                Error("HTTP Err: Status={0}".format(resp.status))
                Error("HTTP Err: Reason={0}".format(resp.reason))
                Error("HTTP Err: Header={0}".format(resp.getheaders()))
                Error("HTTP Err: Body={0}".format(resp.read()))
            time.sleep(self.__class__.RetryWaitingInterval)
            resp = self._HttpRequest(method, host, path, port=port, data=data, secure=secure,
                                     headers=headers, proxyHost=proxyHost, proxyPort=proxyPort)
        return None

    def HttpGet(self, url, headers=None, maxRetry=3, chkProxy=False):
        return self.HttpRequest("GET", url, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy)

    def HttpHead(self, url, headers=None, maxRetry=3, chkProxy=False):
        return self.HttpRequest("HEAD", url, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy)

    def HttpPost(self, url, data, headers=None, maxRetry=3, chkProxy=False):
        return self.HttpRequest("POST", url, data=data, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy)

    def HttpPut(self, url, data, headers=None, maxRetry=3, chkProxy=False):
        return self.HttpRequest("PUT", url, data=data, headers=headers, maxRetry=maxRetry,
chkProxy=chkProxy) def HttpDelete(self, url, headers=None, maxRetry=3, chkProxy=False): return self.HttpRequest("DELETE", url, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) def HttpGetWithoutHeaders(self, url, maxRetry=3, chkProxy=False): """ Return data from an HTTP get on 'url'. """ resp = self.HttpGet(url, headers=None, maxRetry=maxRetry, chkProxy=chkProxy) return resp.read() if resp is not None else None def HttpGetWithHeaders(self, url, maxRetry=3, chkProxy=False): """ Return data from an HTTP get on 'url' with x-ms-agent-name and x-ms-version headers. """ resp = self.HttpGet(url, headers={ "x-ms-agent-name": GuestAgentName, "x-ms-version": ProtocolVersion }, maxRetry=maxRetry, chkProxy=chkProxy) return resp.read() if resp is not None else None def HttpSecureGetWithHeaders(self, url, transportCert, maxRetry=3, chkProxy=False): """ Return output of get using ssl cert. """ resp = self.HttpGet(url, headers={ "x-ms-agent-name": GuestAgentName, "x-ms-version": ProtocolVersion, "x-ms-cipher-name": "DES_EDE3_CBC", "x-ms-guest-agent-public-x509-cert": transportCert }, maxRetry=maxRetry, chkProxy=chkProxy) return resp.read() if resp is not None else None def HttpPostWithHeaders(self, url, data, maxRetry=3, chkProxy=False): headers = { "x-ms-agent-name": GuestAgentName, "Content-Type": "text/xml; charset=utf-8", "x-ms-version": ProtocolVersion } try: return self.HttpPost(url, data=data, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) except HttpResourceGoneError as e: Error("Failed to post: {0} {1}".format(url, e)) return None __StorageVersion="2014-02-14" def GetBlobType(url): restutil = Util() #Check blob type LogIfVerbose("Check blob type.") timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) blobPropResp = restutil.HttpHead(url, { "x-ms-date" : timestamp, 'x-ms-version' : __StorageVersion }, chkProxy=True); blobType = None if blobPropResp is None: Error("Can't get status blob type.") return None blobType = 
blobPropResp.getheader("x-ms-blob-type") LogIfVerbose("Blob type={0}".format(blobType)) return blobType def PutBlockBlob(url, data): restutil = Util() LogIfVerbose("Upload block blob") timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) ret = restutil.HttpPut(url, data, { "x-ms-date" : timestamp, "x-ms-blob-type" : "BlockBlob", "Content-Length": str(len(data)), "x-ms-version" : __StorageVersion }, chkProxy=True) if ret is None: Error("Failed to upload block blob for status.") return -1 return 0 def PutPageBlob(url, data): restutil = Util() LogIfVerbose("Replace old page blob") timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) #Align to 512 bytes pageBlobSize = ((len(data) + 511) / 512) * 512 ret = restutil.HttpPut(url, "", { "x-ms-date" : timestamp, "x-ms-blob-type" : "PageBlob", "Content-Length": "0", "x-ms-blob-content-length" : str(pageBlobSize), "x-ms-version" : __StorageVersion }, chkProxy=True) if ret is None: Error("Failed to clean up page blob for status") return -1 if url.index('?') < 0: url = "{0}?comp=page".format(url) else: url = "{0}&comp=page".format(url) LogIfVerbose("Upload page blob") pageMax = 4 * 1024 * 1024 #Max page size: 4MB start = 0 end = 0 while end < len(data): end = min(len(data), start + pageMax) contentSize = end - start #Align to 512 bytes pageEnd = ((end + 511) / 512) * 512 bufSize = pageEnd - start buf = bytearray(bufSize) buf[0 : contentSize] = data[start : end] ret = restutil.HttpPut(url, buffer(buf), { "x-ms-date" : timestamp, "x-ms-range" : "bytes={0}-{1}".format(start, pageEnd - 1), "x-ms-page-write" : "update", "x-ms-version" : __StorageVersion, "Content-Length": str(pageEnd - start) }, chkProxy=True) if ret is None: Error("Failed to upload page blob for status") return -1 start = end return 0 def UploadStatusBlob(url, data): LogIfVerbose("Upload status blob") LogIfVerbose("Status={0}".format(data)) blobType = GetBlobType(url) if blobType == "BlockBlob": return PutBlockBlob(url, data) elif blobType == 
"PageBlob": return PutPageBlob(url, data) else: Error("Unknown blob type: {0}".format(blobType)) return -1 class TCPHandler(SocketServer.BaseRequestHandler): """ Callback object for LoadBalancerProbeServer. Recv and send LB probe messages. """ def __init__(self,lb_probe): super(TCPHandler,self).__init__() self.lb_probe=lb_probe def GetHttpDateTimeNow(self): """ Return formatted gmtime "Date: Fri, 25 Mar 2011 04:53:10 GMT" """ return time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()) def handle(self): """ Log LB probe messages, read the socket buffer, send LB probe response back to server. """ self.lb_probe.ProbeCounter = (self.lb_probe.ProbeCounter + 1) % 1000000 log = [NoLog, LogIfVerbose][ThrottleLog(self.lb_probe.ProbeCounter)] strCounter = str(self.lb_probe.ProbeCounter) if self.lb_probe.ProbeCounter == 1: Log("Receiving LB probes.") log("Received LB probe # " + strCounter) self.request.recv(1024) self.request.send("HTTP/1.1 200 OK\r\nContent-Length: 2\r\nContent-Type: text/html\r\nDate: " + self.GetHttpDateTimeNow() + "\r\n\r\nOK") class LoadBalancerProbeServer(object): """ Threaded object to receive and send LB probe messages. Load Balancer messages but be recv'd by the load balancing server, or this node may be shut-down. 
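PutPageBlob above rounds both the overall blob length and each uploaded range up to 512-byte page boundaries, as Azure page blobs require. That rounding, isolated as a Python 3 sketch (the helper name is mine; `//` gives the integer result that plain `/` gave the Python 2 original):

```python
def page_align(n, page=512):
    # Round n up to the next multiple of the page size
    # (512 bytes for Azure page blobs).
    return ((n + page - 1) // page) * page
```

The same expression appears twice in PutPageBlob: once for the declared `x-ms-blob-content-length`, and once per chunk for the end of each `x-ms-range` write.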
    def __init__(self, port):
        self.ProbeCounter = 0
        self.server = SocketServer.TCPServer((self.get_ip(), port), TCPHandler)
        self.server_thread = threading.Thread(target=self.server.serve_forever)
        self.server_thread.setDaemon(True)
        self.server_thread.start()

    def shutdown(self):
        self.server.shutdown()

    def get_ip(self):
        for retry in range(1, 6):
            ip = MyDistro.GetIpv4Address()
            if ip == None:
                Log("LoadBalancerProbeServer: GetIpv4Address() returned None, sleeping 10 before retry " + str(retry + 1))
                time.sleep(10)
            else:
                return ip

class ConfigurationProvider(object):
    """
    Parse and store key:values in waagent.conf
    """

    def __init__(self, walaConfigFile):
        self.values = dict()
        if 'MyDistro' not in globals():
            global MyDistro
            MyDistro = GetMyDistro()
        if walaConfigFile is None:
            walaConfigFile = MyDistro.getConfigurationPath()
        if os.path.isfile(walaConfigFile) == False:
            raise Exception("Missing configuration in {0}".format(walaConfigFile))
        try:
            for line in GetFileContents(walaConfigFile).split('\n'):
                if not line.startswith("#") and "=" in line:
                    parts = line.split()[0].split('=')
                    value = parts[1].strip("\" ")
                    if value != "None":
                        self.values[parts[0]] = value
                    else:
                        self.values[parts[0]] = None
        except:
            Error("Unable to parse {0}".format(walaConfigFile))
            raise
        return

    def get(self, key):
        return self.values.get(key)

class EnvMonitor(object):
    """
    Monitor changes to dhcp and hostname.
    If a dhcp client process restart has occurred,
    reset routes and dhcp with fabric.
    """

    def __init__(self):
        self.shutdown = False
        self.HostName = socket.gethostname()
        self.server_thread = threading.Thread(target=self.monitor)
        self.server_thread.setDaemon(True)
        self.server_thread.start()
        self.published = False

    def monitor(self):
        """
        Monitor dhcp client pid and hostname.
        If a dhcp client process restart has occurred,
        reset routes and dhcp with fabric.
""" publish = Config.get("Provisioning.MonitorHostName") dhcpcmd = MyDistro.getpidcmd+ ' ' + MyDistro.getDhcpClientName() dhcppid = RunGetOutput(dhcpcmd)[1] while not self.shutdown: for a in RulesFiles: if os.path.isfile(a): if os.path.isfile(GetLastPathElement(a)): os.remove(GetLastPathElement(a)) shutil.move(a, ".") Log("EnvMonitor: Moved " + a + " -> " + LibDir) MyDistro.setScsiDiskTimeout() if publish != None and publish.lower().startswith("y"): try: if socket.gethostname() != self.HostName: Log("EnvMonitor: Detected host name change: " + self.HostName + " -> " + socket.gethostname()) self.HostName = socket.gethostname() WaAgent.UpdateAndPublishHostName(self.HostName) dhcppid = RunGetOutput(dhcpcmd)[1] self.published = True except: pass else: self.published = True pid = "" if not os.path.isdir("/proc/" + dhcppid.strip()): pid = RunGetOutput(dhcpcmd)[1] if pid != "" and pid != dhcppid: Log("EnvMonitor: Detected dhcp client restart. Restoring routing table.") WaAgent.RestoreRoutes() dhcppid = pid for child in Children: if child.poll() != None: Children.remove(child) time.sleep(5) def SetHostName(self, name): """ Generic call to MyDistro.setHostname(name). Complian to Log on error. """ if socket.gethostname() == name: self.published = True elif MyDistro.setHostname(name): Error("Error: SetHostName: Cannot set hostname to " + name) return ("Error: SetHostName: Cannot set hostname to " + name) def IsHostnamePublished(self): """ Return self.published """ return self.published def ShutdownService(self): """ Stop server comminucation and join the thread to main thread. """ self.shutdown = True self.server_thread.join() class Certificates(object): """ Object containing certificates of host and provisioned user. Parses and splits certificates into files. """ # # 2010-12-15 # 2 # Pkcs7BlobWithPfxContents # MIILTAY... 
# # def __init__(self): self.reinitialize() def reinitialize(self): """ Reset the Role, Incarnation """ self.Incarnation = None self.Role = None def Parse(self, xmlText): """ Parse multiple certificates into seperate files. """ self.reinitialize() SetFileContents("Certificates.xml", xmlText) dom = xml.dom.minidom.parseString(xmlText) for a in [ "CertificateFile", "Version", "Incarnation", "Format", "Data", ]: if not dom.getElementsByTagName(a): Error("Certificates.Parse: Missing " + a) return None node = dom.childNodes[0] if node.localName != "CertificateFile": Error("Certificates.Parse: root not CertificateFile") return None SetFileContents("Certificates.p7m", "MIME-Version: 1.0\n" + "Content-Disposition: attachment; filename=\"Certificates.p7m\"\n" + "Content-Type: application/x-pkcs7-mime; name=\"Certificates.p7m\"\n" + "Content-Transfer-Encoding: base64\n\n" + GetNodeTextData(dom.getElementsByTagName("Data")[0])) if Run(Openssl + " cms -decrypt -in Certificates.p7m -inkey TransportPrivate.pem -recip TransportCert.pem | " + Openssl + " pkcs12 -nodes -password pass: -out Certificates.pem"): Error("Certificates.Parse: Failed to extract certificates from CMS message.") return self # There may be multiple certificates in this package. Split them. 
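The loop that follows walks Certificates.pem line by line and writes each PEM block out as a numbered `.prv` (private key) or `.crt` (certificate) file, keyed on the END marker. The same split, sketched in Python 3 (the helper name and in-memory return shape are mine; the agent writes files instead):

```python
import re

def split_pem_bundle(pem_text):
    """Split a concatenated PEM bundle into (kind, block) pairs,
    where kind is 'prv' for private keys and 'crt' for certificates."""
    blocks, current = [], []
    for line in pem_text.splitlines(True):
        current.append(line)
        # Same END-marker pattern the agent matches on.
        if re.match(r'[-]+END .*?(KEY|CERTIFICATE)[-]+', line):
            kind = 'prv' if 'KEY' in line else 'crt'
            blocks.append((kind, ''.join(current)))
            current = []
    return blocks
```

Pairing each private key back to its certificate is then done, as below, by comparing the public keys that openssl derives from each file.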
file = open("Certificates.pem") pindex = 1 cindex = 1 output = open("temp.pem", "w") for line in file.readlines(): output.write(line) if re.match(r'[-]+END .*?(KEY|CERTIFICATE)[-]+$',line): output.close() if re.match(r'[-]+END .*?KEY[-]+$',line): os.rename("temp.pem", str(pindex) + ".prv") pindex += 1 else: os.rename("temp.pem", str(cindex) + ".crt") cindex += 1 output = open("temp.pem", "w") output.close() os.remove("temp.pem") keys = dict() index = 1 filename = str(index) + ".crt" while os.path.isfile(filename): thumbprint = (RunGetOutput(Openssl + " x509 -in " + filename + " -fingerprint -noout")[1]).rstrip().split('=')[1].replace(':', '').upper() pubkey=RunGetOutput(Openssl + " x509 -in " + filename + " -pubkey -noout")[1] keys[pubkey] = thumbprint os.rename(filename, thumbprint + ".crt") os.chmod(thumbprint + ".crt", 0600) MyDistro.setSelinuxContext(thumbprint + '.crt','unconfined_u:object_r:ssh_home_t:s0') index += 1 filename = str(index) + ".crt" index = 1 filename = str(index) + ".prv" while os.path.isfile(filename): pubkey = RunGetOutput(Openssl + " rsa -in " + filename + " -pubout 2> /dev/null ")[1] os.rename(filename, keys[pubkey] + ".prv") os.chmod(keys[pubkey] + ".prv", 0600) MyDistro.setSelinuxContext( keys[pubkey] + '.prv','unconfined_u:object_r:ssh_home_t:s0') index += 1 filename = str(index) + ".prv" return self class SharedConfig(object): """ Parse role endpoint server and goal state config. """ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # def __init__(self): self.reinitialize() def reinitialize(self): """ Reset members. """ self.RdmaMacAddress = None self.RdmaIPv4Address = None self.xmlText = None def Parse(self, xmlText): """ Parse and write configuration to file SharedConfig.xml. 
""" LogIfVerbose(xmlText) self.reinitialize() self.xmlText = xmlText dom = xml.dom.minidom.parseString(xmlText) for a in [ "SharedConfig", "Deployment", "Service", "ServiceInstance", "Incarnation", "Role", ]: if not dom.getElementsByTagName(a): Error("SharedConfig.Parse: Missing " + a) node = dom.childNodes[0] if node.localName != "SharedConfig": Error("SharedConfig.Parse: root not SharedConfig") nodes = dom.getElementsByTagName("Instance") if nodes is not None and len(nodes) != 0: node = nodes[0] if node.hasAttribute("rdmaMacAddress"): addr = node.getAttribute("rdmaMacAddress") self.RdmaMacAddress = addr[0:2] for i in range(1, 6): self.RdmaMacAddress += ":" + addr[2 * i : 2 *i + 2] if node.hasAttribute("rdmaIPv4Address"): self.RdmaIPv4Address = node.getAttribute("rdmaIPv4Address") return self def Save(self): LogIfVerbose("Save SharedConfig.xml") SetFileContents("SharedConfig.xml", self.xmlText) def InvokeTopologyConsumer(self): program = Config.get("Role.TopologyConsumer") if program != None: try: Children.append(subprocess.Popen([program, LibDir + "/SharedConfig.xml"])) except OSError, e : ErrorWithPrefix('Agent.Run','Exception: '+ str(e) +' occured launching ' + program ) def Process(self): global rdma_configured if not rdma_configured and self.RdmaMacAddress is not None and self.RdmaIPv4Address is not None: handler = RdmaHandler(self.RdmaMacAddress, self.RdmaIPv4Address) handler.start() rdma_configured = True self.InvokeTopologyConsumer() rdma_configured = False class RdmaError(Exception): pass class RdmaHandler(object): """ Handle rdma configuration. 
""" def __init__(self, mac, ip_addr, dev="/dev/hvnd_rdma", dat_conf_files=['/etc/dat.conf', '/etc/rdma/dat.conf', '/usr/local/etc/dat.conf']): self.mac = mac self.ip_addr = ip_addr self.dev = dev self.dat_conf_files = dat_conf_files self.data = ('rdmaMacAddress="{0}" rdmaIPv4Address="{1}"' '').format(self.mac, self.ip_addr) def start(self): """ Start a new thread to process rdma """ threading.Thread(target=self.process).start() def process(self): try: self.set_dat_conf() self.set_rdma_dev() self.set_rdma_ip() except RdmaError as e: Error("Failed to config rdma device: {0}".format(e)) def set_dat_conf(self): """ Agent needs to search all possible locations for dat.conf """ Log("Set dat.conf") for dat_conf_file in self.dat_conf_files: if not os.path.isfile(dat_conf_file): continue try: self.write_dat_conf(dat_conf_file) except IOError as e: raise RdmaError("Failed to write to dat.conf: {0}".format(e)) def write_dat_conf(self, dat_conf_file): Log("Write config to {0}".format(dat_conf_file)) old = ("ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 " "dapl.2.0 \"\S+ 0\"") new = ("ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 " "dapl.2.0 \"{0} 0\"").format(self.ip_addr) lines = GetFileContents(dat_conf_file) lines = re.sub(old, new, lines) SetFileContents(dat_conf_file, lines) def set_rdma_dev(self): """ Write config string to /dev/hvnd_rdma """ Log("Set /dev/hvnd_rdma") self.wait_rdma_dev() self.write_rdma_dev_conf() def write_rdma_dev_conf(self): Log("Write rdma config to {0}: {1}".format(self.dev, self.data)) try: with open(self.dev, "w") as c: c.write(self.data) except IOError, e: raise RdmaError("Error writing {0}, {1}".format(self.dev, e)) def wait_rdma_dev(self): Log("Wait for /dev/hvnd_rdma") retry = 0 while retry < 120: if os.path.exists(self.dev): return time.sleep(1) retry += 1 raise RdmaError("The device doesn't show up in 120 seconds") def set_rdma_ip(self): Log("Set ip addr for rdma") try: if_name = MyDistro.getInterfaceNameByMac(self.mac) 
#Azure is using 12 bits network mask for infiniband. MyDistro.configIpV4(if_name, self.ip_addr, 12) except Exception as e: raise RdmaError("Failed to config rdma device: {0}".format(e)) class ExtensionsConfig(object): """ Parse ExtensionsConfig, downloading and unpacking them to /var/lib/waagent. Install if true, remove if it is set to false. """ # # # # # # # {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"1BE9A13AA1321C7C515EF109746998BAB6D86FD1", #"protectedSettings":"MIIByAYJKoZIhvcNAQcDoIIBuTCCAbUCAQAxggFxMIIBbQIBADBVMEExPzA9BgoJkiaJk/IsZAEZFi9XaW5kb3dzIEF6dXJlIFNlcnZpY2UgTWFuYWdlbWVudCBmb3IgR #Xh0ZW5zaW9ucwIQZi7dw+nhc6VHQTQpCiiV2zANBgkqhkiG9w0BAQEFAASCAQCKr09QKMGhwYe+O4/a8td+vpB4eTR+BQso84cV5KCAnD6iUIMcSYTrn9aveY6v6ykRLEw8GRKfri2d6 #tvVDggUrBqDwIgzejGTlCstcMJItWa8Je8gHZVSDfoN80AEOTws9Fp+wNXAbSuMJNb8EnpkpvigAWU2v6pGLEFvSKC0MCjDTkjpjqciGMcbe/r85RG3Zo21HLl0xNOpjDs/qqikc/ri43Y76E/X #v1vBSHEGMFprPy/Hwo3PqZCnulcbVzNnaXN3qi/kxV897xGMPPC3IrO7Nc++AT9qRLFI0841JLcLTlnoVG1okPzK9w6ttksDQmKBSHt3mfYV+skqs+EOMDsGCSqGSIb3DQEHATAUBggqh #kiG9w0DBwQITgu0Nu3iFPuAGD6/QzKdtrnCI5425fIUy7LtpXJGmpWDUA==","publicSettings":{"port":"3000"}}}]} # # #https://ostcextensions.blob.core.test-cint.azure-test.net/vhds/eg-plugin7-vm.eg-plugin7-vm.eg-plugin7-vm.status?sr=b&sp=rw& #se=9999-01-01&sk=key1&sv=2012-02-12&sig=wRUIDN1x2GC06FWaetBP9sjjifOWvRzS2y2XBB4qoBU%3D def __init__(self): self.reinitialize() def reinitialize(self): """ Reset members. """ self.Extensions = None self.Plugins = None self.Util = None def Parse(self, xmlText): """ Write configuration to file ExtensionsConfig.xml. Log plugin specific activity to /var/log/azure/.//CommandExecution.log. If state is enabled: if the plugin is installed: if the new plugin's version is higher if DisallowMajorVersionUpgrade is false or if true, the version is a minor version do upgrade: download the new archive do the updateCommand. 
disable the old plugin and remove enable the new plugin if the new plugin's version is the same or lower: create the new .settings file from the configuration received do the enableCommand if the plugin is not installed: download/unpack archive and call the installCommand/Enable if state is disabled: call disableCommand if state is uninstall: call uninstallCommand remove old plugin directory. """ self.reinitialize() self.Util=Util() dom = xml.dom.minidom.parseString(xmlText) LogIfVerbose(xmlText) self.plugin_log_dir='/var/log/azure' if not os.path.exists(self.plugin_log_dir): os.mkdir(self.plugin_log_dir) try: self.Extensions=dom.getElementsByTagName("Extensions") pg = dom.getElementsByTagName("Plugins") if len(pg) > 0: self.Plugins = pg[0].getElementsByTagName("Plugin") else: self.Plugins = [] incarnation=self.Extensions[0].getAttribute("goalStateIncarnation") SetFileContents('ExtensionsConfig.'+incarnation+'.xml', xmlText) except Exception, e: Error('ERROR: Error parsing ExtensionsConfig: {0}.'.format(e)) return None for p in self.Plugins: if len(p.getAttribute("location"))<1: # this plugin is inside the PluginSettings continue p.setAttribute('restricted','false') previous_version = None version=p.getAttribute("version") name=p.getAttribute("name") plog_dir=self.plugin_log_dir+'/'+name +'/'+ version if not os.path.exists(plog_dir): os.makedirs(plog_dir) p.plugin_log=plog_dir+'/CommandExecution.log' handler=name + '-' + version if p.getAttribute("isJson") != 'true': Error("Plugin " + name+" version: " +version+" is not a JSON Extension. 
Skipping.") continue Log("Found Plugin: " + name + ' version: ' + version) if p.getAttribute("state") == 'disabled' or p.getAttribute("state") == 'uninstall': #disable zip_dir=LibDir+"/" + name + '-' + version mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') continue manifest = GetFileContents(mfile) p.setAttribute('manifestdata',manifest) if self.launchCommand(p.plugin_log,name,version,'disableCommand') == None : self.SetHandlerState(handler, 'Enabled') Error('Unable to disable '+name) SimpleLog(p.plugin_log,'ERROR: Unable to disable '+name) else : self.SetHandlerState(handler, 'Disabled') Log(name+' is disabled') SimpleLog(p.plugin_log,name+' is disabled') # uninstall if needed if p.getAttribute("state") == 'uninstall': if self.launchCommand(p.plugin_log,name,version,'uninstallCommand') == None : self.SetHandlerState(handler, 'Installed') Error('Unable to uninstall '+name) SimpleLog(p.plugin_log,'Unable to uninstall '+name) else : self.SetHandlerState(handler, 'NotInstalled') Log(name+' uninstallCommand completed .') # remove the plugin Run('rm -rf ' + LibDir + '/' + name +'-'+ version + '*') Log(name +'-'+ version + ' extension files deleted.') SimpleLog(p.plugin_log,name +'-'+ version + ' extension files deleted.') continue # state is enabled # if the same plugin exists and the version is newer or # does not exist then download and unzip the new plugin plg_dir=None latest_version_installed = LooseVersion("0.0") for item in os.listdir(LibDir): itemPath = os.path.join(LibDir, item) if os.path.isdir(itemPath) and name in item: try: #Split plugin dir name with '-' to get intalled plugin name and version sperator = item.rfind('-') if sperator < 0: continue installed_plg_name = item[0:sperator] installed_plg_version = LooseVersion(item[sperator + 1:]) #Check installed plugin name and compare installed 
version to get the latest version installed if installed_plg_name == name and installed_plg_version > latest_version_installed: plg_dir = itemPath previous_version = str(installed_plg_version) latest_version_installed = installed_plg_version except Exception as e: Warn("Invalid plugin dir name: {0} {1}".format(item, e)) continue if plg_dir == None or LooseVersion(version) > LooseVersion(previous_version) : location=p.getAttribute("location") Log("Downloading plugin manifest: " + name + " from " + location) SimpleLog(p.plugin_log,"Downloading plugin manifest: " + name + " from " + location) self.Util.Endpoint=location.split('/')[2] Log("Plugin server is: " + self.Util.Endpoint) SimpleLog(p.plugin_log,"Plugin server is: " + self.Util.Endpoint) manifest=self.Util.HttpGetWithoutHeaders(location, chkProxy=True) if manifest == None: Error("Unable to download plugin manifest" + name + " from primary location. Attempting with failover location.") SimpleLog(p.plugin_log,"Unable to download plugin manifest" + name + " from primary location. Attempting with failover location.") failoverlocation=p.getAttribute("failoverlocation") self.Util.Endpoint=failoverlocation.split('/')[2] Log("Plugin failover server is: " + self.Util.Endpoint) SimpleLog(p.plugin_log,"Plugin failover server is: " + self.Util.Endpoint) manifest=self.Util.HttpGetWithoutHeaders(failoverlocation, chkProxy=True) #if failoverlocation also fail what to do then? if manifest == None: AddExtensionEvent(name,WALAEventOperation.Download,False,0,version,"Download mainfest fail "+failoverlocation) Log("Plugin manifest " + name + " downloading failed from failover location.") SimpleLog(p.plugin_log,"Plugin manifest " + name + " downloading failed from failover location.") filepath=LibDir+"/" + name + '.' 
+ incarnation + '.manifest' if os.path.splitext(location)[-1] == '.xml' : #if this is an xml file we may have a BOM if ord(manifest[0]) > 128 and ord(manifest[1]) > 128 and ord(manifest[2]) > 128: manifest=manifest[3:] SetFileContents(filepath,manifest) #Get the bundle url from the manifest p.setAttribute('manifestdata',manifest) man_dom = xml.dom.minidom.parseString(manifest) bundle_uri = "" for mp in man_dom.getElementsByTagName("Plugin"): if GetNodeTextData(mp.getElementsByTagName("Version")[0]) == version: bundle_uri = GetNodeTextData(mp.getElementsByTagName("Uri")[0]) break if len(mp.getElementsByTagName("DisallowMajorVersionUpgrade")): if GetNodeTextData(mp.getElementsByTagName("DisallowMajorVersionUpgrade")[0]) == 'true' and previous_version !=None and previous_version.split('.')[0] != version.split('.')[0] : Log('DisallowMajorVersionUpgrade is true, this major version is restricted from upgrade.') SimpleLog(p.plugin_log,'DisallowMajorVersionUpgrade is true, this major version is restricted from upgrade.') p.setAttribute('restricted','true') continue if len(bundle_uri) < 1 : Error("Unable to fetch Bundle URI from manifest for " + name + " v " + version) SimpleLog(p.plugin_log,"Unable to fetch Bundle URI from manifest for " + name + " v " + version) continue Log("Bundle URI = " + bundle_uri) SimpleLog(p.plugin_log,"Bundle URI = " + bundle_uri) # Download the zipfile archive and save as '.zip' bundle=self.Util.HttpGetWithoutHeaders(bundle_uri, chkProxy=True) if bundle == None: AddExtensionEvent(name,WALAEventOperation.Download,True,0,version,"Download zip fail "+bundle_uri) Error("Unable to download plugin bundle" + bundle_uri ) SimpleLog(p.plugin_log,"Unable to download plugin bundle" + bundle_uri ) continue AddExtensionEvent(name,WALAEventOperation.Download,True,0,version,"Download Success") b=bytearray(bundle) filepath=LibDir+"/" + os.path.basename(bundle_uri) + '.zip' SetFileContents(filepath,b) Log("Plugin bundle" + bundle_uri + "downloaded successfully 
length = " + str(len(bundle))) SimpleLog(p.plugin_log,"Plugin bundle" + bundle_uri + "downloaded successfully length = " + str(len(bundle))) # unpack the archive z=zipfile.ZipFile(filepath) zip_dir=LibDir+"/" + name + '-' + version z.extractall(zip_dir) Log('Extracted ' + bundle_uri + ' to ' + zip_dir) SimpleLog(p.plugin_log,'Extracted ' + bundle_uri + ' to ' + zip_dir) # zip no file perms in .zip so set all the scripts to +x Run( "find " + zip_dir +" -type f | xargs chmod u+x ") #write out the base64 config data so the plugin can process it. mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') SimpleLog(p.plugin_log,'HandlerManifest.json not found.') continue manifest = GetFileContents(mfile) p.setAttribute('manifestdata',manifest) # create the status and config dirs Run('mkdir -p ' + root + '/status') Run('mkdir -p ' + root + '/config') # write out the configuration data to goalStateIncarnation.settings file in the config path. 
config='' seqNo='0' if len(dom.getElementsByTagName("PluginSettings")) != 0 : pslist=dom.getElementsByTagName("PluginSettings")[0].getElementsByTagName("Plugin") for ps in pslist: if name == ps.getAttribute("name") and version == ps.getAttribute("version"): Log("Found RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"Found RuntimeSettings for " + name + " V " + version) config=GetNodeTextData(ps.getElementsByTagName("RuntimeSettings")[0]) seqNo=ps.getElementsByTagName("RuntimeSettings")[0].getAttribute("seqNo") break if config == '': Log("No RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"No RuntimeSettings for " + name + " V " + version) SetFileContents(root +"/config/" + seqNo +".settings", config ) #create HandlerEnvironment.json handler_env='[{ "name": "'+name+'", "seqNo": "'+seqNo+'", "version": 1.0, "handlerEnvironment": { "logFolder": "'+os.path.dirname(p.plugin_log)+'", "configFolder": "' + root + '/config", "statusFolder": "' + root + '/status", "heartbeatFile": "'+ root + '/heartbeat.log"}}]' SetFileContents(root+'/HandlerEnvironment.json',handler_env) self.SetHandlerState(handler, 'NotInstalled') cmd = '' getcmd='installCommand' if plg_dir != None and previous_version != None and LooseVersion(version) > LooseVersion(previous_version): previous_handler=name+'-'+previous_version if self.GetHandlerState(previous_handler) != 'NotInstalled': getcmd='updateCommand' # disable the old plugin if it exists if self.launchCommand(p.plugin_log,name,previous_version,'disableCommand') == None : self.SetHandlerState(previous_handler, 'Enabled') Error('Unable to disable old plugin '+name+' version ' + previous_version) SimpleLog(p.plugin_log,'Unable to disable old plugin '+name+' version ' + previous_version) else : self.SetHandlerState(previous_handler, 'Disabled') Log(name+' version ' + previous_version + ' is disabled') SimpleLog(p.plugin_log,name+' version ' + previous_version + ' is disabled') try: Log("Copy status file 
from old plugin dir to new") old_plg_dir = plg_dir new_plg_dir = os.path.join(LibDir, "{0}-{1}".format(name, version)) old_ext_status_dir = os.path.join(old_plg_dir, "status") new_ext_status_dir = os.path.join(new_plg_dir, "status") if os.path.isdir(old_ext_status_dir): for status_file in os.listdir(old_ext_status_dir): status_file_path = os.path.join(old_ext_status_dir, status_file) if os.path.isfile(status_file_path): shutil.copy2(status_file_path, new_ext_status_dir) mrseq_file = os.path.join(old_plg_dir, "mrseq") if os.path.isfile(mrseq_file): shutil.copy(mrseq_file, new_plg_dir) except Exception as e: Error("Failed to copy status file.") isupgradeSuccess = True if getcmd=='updateCommand': if self.launchCommand(p.plugin_log,name,version,getcmd,previous_version) == None : Error('Update failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Update failed for '+name+'-'+version) isupgradeSuccess=False else : Log('Update complete'+name+'-'+version) SimpleLog(p.plugin_log,'Update complete'+name+'-'+version) # if we updated - call unistall for the old plugin if self.launchCommand(p.plugin_log,name,previous_version,'uninstallCommand') == None : self.SetHandlerState(previous_handler, 'Installed') Error('Uninstall failed for '+name+'-'+previous_version) SimpleLog(p.plugin_log,'Uninstall failed for '+name+'-'+previous_version) isupgradeSuccess=False else : self.SetHandlerState(previous_handler, 'NotInstalled') Log('Uninstall complete'+ previous_handler ) SimpleLog(p.plugin_log,'Uninstall complete'+ name +'-' + previous_version) try: #rm old plugin dir if os.path.isdir(plg_dir): shutil.rmtree(plg_dir) Log(name +'-'+ previous_version + ' extension files deleted.') SimpleLog(p.plugin_log,name +'-'+ previous_version + ' extension files deleted.') except Exception as e: Error("Failed to remove old plugin directory") AddExtensionEvent(name,WALAEventOperation.Upgrade,isupgradeSuccess,0,previous_version) else : # run install if 
self.launchCommand(p.plugin_log,name,version,getcmd) == None : self.SetHandlerState(handler, 'NotInstalled') Error('Installation failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation failed for '+name+'-'+version) else : self.SetHandlerState(handler, 'Installed') Log('Installation completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation completed for '+name+'-'+version) #end if plg_dir == none or version > = prev # change incarnation of settings file so it knows how to name status... zip_dir=LibDir+"/" + name + '-' + version mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') SimpleLog(p.plugin_log,'HandlerManifest.json not found.') continue manifest = GetFileContents(mfile) p.setAttribute('manifestdata',manifest) config='' seqNo='0' if len(dom.getElementsByTagName("PluginSettings")) != 0 : try: pslist=dom.getElementsByTagName("PluginSettings")[0].getElementsByTagName("Plugin") except: Error('Error parsing ExtensionsConfig.') SimpleLog(p.plugin_log,'Error parsing ExtensionsConfig.') continue for ps in pslist: if name == ps.getAttribute("name") and version == ps.getAttribute("version"): Log("Found RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"Found RuntimeSettings for " + name + " V " + version) config=GetNodeTextData(ps.getElementsByTagName("RuntimeSettings")[0]) seqNo=ps.getElementsByTagName("RuntimeSettings")[0].getAttribute("seqNo") break if config == '': Error("No RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"No RuntimeSettings for " + name + " V " + version) SetFileContents(root +"/config/" + seqNo +".settings", config ) # state is still enable if (self.GetHandlerState(handler) == 'NotInstalled'): # run install first if true if self.launchCommand(p.plugin_log,name,version,'installCommand') == None : 
self.SetHandlerState(handler, 'NotInstalled') Error('Installation failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation failed for '+name+'-'+version) else : self.SetHandlerState(handler, 'Installed') Log('Installation completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation completed for '+name+'-'+version) if (self.GetHandlerState(handler) != 'NotInstalled'): if self.launchCommand(p.plugin_log,name,version,'enableCommand') == None : self.SetHandlerState(handler, 'Installed') Error('Enable failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Enable failed for '+name+'-'+version) else : self.SetHandlerState(handler, 'Enabled') Log('Enable completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Enable completed for '+name+'-'+version) # this plugin processing is complete Log('Processing completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Processing completed for '+name+'-'+version) #end plugin processing loop Log('Finished processing ExtensionsConfig.xml') try: SimpleLog(p.plugin_log,'Finished processing ExtensionsConfig.xml') except: pass return self def launchCommand(self,plugin_log,name,version,command,prev_version=None): commandToEventOperation={ "installCommand":WALAEventOperation.Install, "uninstallCommand":WALAEventOperation.UnIsntall, "updateCommand": WALAEventOperation.Upgrade, "enableCommand": WALAEventOperation.Enable, "disableCommand": WALAEventOperation.Disable, } isSuccess=True start = datetime.datetime.now() r=self.__launchCommandWithoutEventLog(plugin_log,name,version,command,prev_version) if r==None: isSuccess=False Duration = int((datetime.datetime.now() - start).seconds) if commandToEventOperation.get(command): AddExtensionEvent(name,commandToEventOperation[command],isSuccess,Duration,version) return r def __launchCommandWithoutEventLog(self,plugin_log,name,version,command,prev_version=None): # get the manifest and read the command mfile=None zip_dir=LibDir+"/" + name + '-' + version for root, dirs, files in 
os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') SimpleLog(plugin_log,'HandlerManifest.json not found.') return None manifest = GetFileContents(mfile) try: jsn = json.loads(manifest) except: Error('Error parsing HandlerManifest.json.') SimpleLog(plugin_log,'Error parsing HandlerManifest.json.') return None if type(jsn)==list: jsn=jsn[0] if jsn.has_key('handlerManifest') : cmd = jsn['handlerManifest'][command] else : Error('Key handlerManifest not found. Handler cannot be installed.') SimpleLog(plugin_log,'Key handlerManifest not found. Handler cannot be installed.') if len(cmd) == 0 : Error('Unable to read ' + command ) SimpleLog(plugin_log,'Unable to read ' + command ) return None # for update we send the path of the old installation arg='' if prev_version != None : arg=' ' + LibDir+'/' + name + '-' + prev_version dirpath=os.path.dirname(mfile) LogIfVerbose('Command is '+ dirpath+'/'+ cmd) # launch pid=None try: child = subprocess.Popen(dirpath+'/'+cmd+arg,shell=True,cwd=dirpath,stdout=subprocess.PIPE) except Exception as e: Error('Exception launching ' + cmd + str(e)) SimpleLog(plugin_log,'Exception launching ' + cmd + str(e)) pid = child.pid if pid == None or pid < 1 : ExtensionChildren.append((-1,root)) Error('Error launching ' + cmd + '.') SimpleLog(plugin_log,'Error launching ' + cmd + '.') else : ExtensionChildren.append((pid,root)) Log("Spawned "+ cmd + " PID " + str(pid)) SimpleLog(plugin_log,"Spawned "+ cmd + " PID " + str(pid)) # wait until install/upgrade is finished timeout = 300 # 5 minutes retry = timeout/5 while retry > 0 and child.poll() == None: LogIfVerbose(cmd + ' still running with PID ' + str(pid)) time.sleep(5) retry-=1 if retry==0: Error('Process exceeded timeout of ' + str(timeout) + ' seconds. Terminating process ' + str(pid)) SimpleLog(plugin_log,'Process exceeded timeout of ' + str(timeout) + ' seconds. 
Terminating process ' + str(pid)) os.kill(pid,9) return None code = child.wait() if code == None or code != 0: Error('Process ' + str(pid) + ' returned non-zero exit code (' + str(code) + ')') SimpleLog(plugin_log,'Process ' + str(pid) + ' returned non-zero exit code (' + str(code) + ')') return None Log(command + ' completed.') SimpleLog(plugin_log,command + ' completed.') return 0 def ReportHandlerStatus(self): """ Collect all status reports. """ # { "version": "1.0", "timestampUTC": "2014-03-31T21:28:58Z", # "aggregateStatus": { # "guestAgentStatus": { "version": "2.0.4PRE", "status": "Ready", "formattedMessage": { "lang": "en-US", "message": "GuestAgent is running and accepting new configurations." } }, # "handlerAggregateStatus": [{ # "handlerName": "ExampleHandlerLinux", "handlerVersion": "1.0", "status": "Ready", "runtimeSettingsStatus": { # "sequenceNumber": "2", "settingsStatus": { "timestampUTC": "2014-03-31T23:46:00Z", "status": { "name": "ExampleHandlerLinux", "operation": "Command Execution Finished", "configurationAppliedTime": "2014-03-31T23:46:00Z", "status": "success", "formattedMessage": { "lang": "en-US", "message": "Finished executing command" }, # "substatus": [ # { "name": "StdOut", "status": "success", "formattedMessage": { "lang": "en-US", "message": "Goodbye world!" } }, # { "name": "StdErr", "status": "success", "formattedMessage": { "lang": "en-US", "message": "" } } # ] # } } } } # ] # }} try: incarnation=self.Extensions[0].getAttribute("goalStateIncarnation") except: Error('Error parsing attribute "goalStateIncarnation". Unable to send status reports') return -1 status='' statuses='' for p in self.Plugins: if p.getAttribute("state") == 'uninstall' or p.getAttribute("restricted") == 'true' : continue version=p.getAttribute("version") name=p.getAttribute("name") if p.getAttribute("isJson") != 'true': LogIfVerbose("Plugin " + name+" version: " +version+" is not a JSON Extension. 
Skipping.") continue reportHeartbeat = False if len(p.getAttribute("manifestdata"))<1: Error("Failed to get manifestdata.") else: reportHeartbeat = json.loads(p.getAttribute("manifestdata"))[0]['handlerManifest']['reportHeartbeat'] if len(statuses)>0: statuses+=',' statuses+=self.GenerateAggStatus(name, version, reportHeartbeat) tstamp=time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) #header #agent state if provisioned == False: if provisionError == None : agent_state='Provisioning' agent_msg='Guest Agent is starting.' else: agent_state='Provisioning Error.' agent_msg=provisionError else: agent_state='Ready' agent_msg='GuestAgent is running and accepting new configurations.' status='{"version":"1.0","timestampUTC":"'+tstamp+'","aggregateStatus":{"guestAgentStatus":{"version":"'+GuestAgentVersion+'","status":"'+agent_state+'","formattedMessage":{"lang":"en-US","message":"'+agent_msg+'"}},"handlerAggregateStatus":['+statuses+']}}' try: uri=GetNodeTextData(self.Extensions[0].getElementsByTagName("StatusUploadBlob")[0]).replace('&','&') except: Error('Error parsing element "StatusUploadBlob". 
Unable to send status reports')
            return -1
        LogIfVerbose('Status report ' + status + ' sent to ' + uri)
        return UploadStatusBlob(uri, status.encode("utf-8"))

    def GetCurrentSequenceNumber(self, plugin_base_dir):
        """
        Get the settings file with the biggest sequence number in the config folder.
        """
        config_dir = os.path.join(plugin_base_dir, 'config')
        seq_no = 0
        for subdir, dirs, files in os.walk(config_dir):
            for file in files:
                try:
                    cur_seq_no = int(os.path.basename(file).split('.')[0])
                    if cur_seq_no > seq_no:
                        seq_no = cur_seq_no
                except ValueError:
                    continue
        return str(seq_no)

    def GenerateAggStatus(self, name, version, reportHeartbeat = False):
        """
        Generate the status which Azure can understand from the status and
        heartbeat reported by the extension.
        """
        plugin_base_dir = LibDir + '/' + name + '-' + version + '/'
        current_seq_no = self.GetCurrentSequenceNumber(plugin_base_dir)
        status_file = os.path.join(plugin_base_dir, 'status/', current_seq_no + '.status')
        heartbeat_file = os.path.join(plugin_base_dir, 'heartbeat.log')
        handler_state_file = os.path.join(plugin_base_dir, 'config', 'HandlerState')
        agg_state = 'NotReady'
        handler_state = None
        status_obj = None
        status_code = None
        formatted_message = None
        localized_message = None
        if os.path.exists(handler_state_file):
            handler_state = GetFileContents(handler_state_file).lower()
            if HandlerStatusToAggStatus.has_key(handler_state):
                agg_state = HandlerStatusToAggStatus[handler_state]
        if reportHeartbeat:
            if os.path.exists(heartbeat_file):
                d = int(time.time() - os.stat(heartbeat_file).st_mtime)
                if d > 600:
                    # not updated for more than 10 min
                    agg_state = 'Unresponsive'
                else:
                    try:
                        heartbeat = json.loads(GetFileContents(heartbeat_file))[0]["heartbeat"]
                        agg_state = heartbeat.get("status")
                        status_code = heartbeat.get("code")
                        formatted_message = heartbeat.get("formattedMessage")
                        localized_message = heartbeat.get("message")
                    except:
                        Error("Incorrect heartbeat file. Ignoring it.")
            else:
                agg_state = 'Unresponsive'
        # get status file reported by extension
        if os.path.exists(status_file):
            # raw status generated by extension is an array;
            # get the first item and remove the unnecessary element
            try:
                status_obj = json.loads(GetFileContents(status_file))[0]
                del status_obj["version"]
            except:
                Error("Incorrect status file. Will NOT report settingsStatus in settings.")
        agg_status_obj = {"handlerName": name,
                          "handlerVersion": version,
                          "status": agg_state,
                          "runtimeSettingsStatus": {"sequenceNumber": current_seq_no}}
        if status_obj:
            agg_status_obj["runtimeSettingsStatus"]["settingsStatus"] = status_obj
        if status_code != None:
            agg_status_obj["code"] = status_code
        if formatted_message:
            agg_status_obj["formattedMessage"] = formatted_message
        if localized_message:
            agg_status_obj["message"] = localized_message
        agg_status_string = json.dumps(agg_status_obj)
        LogIfVerbose("Handler Aggregated Status:" + agg_status_string)
        return agg_status_string

    def SetHandlerState(self, handler, state=''):
        zip_dir = LibDir + "/" + handler
        mfile = None
        for root, dirs, files in os.walk(zip_dir):
            for f in files:
                if f in ('HandlerManifest.json'):
                    mfile = os.path.join(root, f)
            if mfile != None:
                break
        if mfile == None:
            Error('SetHandlerState(): HandlerManifest.json not found, cannot set HandlerState.')
            return None
        Log("SetHandlerState: " + handler + ", " + state)
        return SetFileContents(os.path.dirname(mfile) + '/config/HandlerState', state)

    def GetHandlerState(self, handler):
        handlerState = GetFileContents(handler + '/config/HandlerState')
        if (handlerState):
            return handlerState.rstrip('\r\n')
        else:
            return 'NotInstalled'

class HostingEnvironmentConfig(object):
    """
    Parse hosting environment config and store in HostingEnvironmentConfig.xml.
    """
    # (commented sample HostingEnvironmentConfig XML; markup lost in extraction)

    def __init__(self):
        self.reinitialize()

    def reinitialize(self):
        """
        Reset members.
        """
        self.StoredCertificates = None
        self.Deployment = None
        self.Incarnation = None
        self.Role = None
        self.HostingEnvironmentSettings = None
        self.ApplicationSettings = None
        self.Certificates = None
        self.ResourceReferences = None

    def Parse(self, xmlText):
        """
        Parse and create HostingEnvironmentConfig.xml.
        """
        self.reinitialize()
        SetFileContents("HostingEnvironmentConfig.xml", xmlText)
        dom = xml.dom.minidom.parseString(xmlText)
        for a in ["HostingEnvironmentConfig", "Deployment", "Service",
                  "ServiceInstance", "Incarnation", "Role", ]:
            if not dom.getElementsByTagName(a):
                Error("HostingEnvironmentConfig.Parse: Missing " + a)
                return None
        node = dom.childNodes[0]
        if node.localName != "HostingEnvironmentConfig":
            Error("HostingEnvironmentConfig.Parse: root not HostingEnvironmentConfig")
            return None
        self.ApplicationSettings = dom.getElementsByTagName("Setting")
        self.Certificates = dom.getElementsByTagName("StoredCertificate")
        return self

    def DecryptPassword(self, e):
        """
        Return decrypted password.
        """
        SetFileContents("password.p7m",
                        "MIME-Version: 1.0\n"
                        + "Content-Disposition: attachment; filename=\"password.p7m\"\n"
                        + "Content-Type: application/x-pkcs7-mime; name=\"password.p7m\"\n"
                        + "Content-Transfer-Encoding: base64\n\n"
                        + textwrap.fill(e, 64))
        return RunGetOutput(Openssl + " cms -decrypt -in password.p7m -inkey Certificates.pem -recip Certificates.pem")[1]

    def ActivateResourceDisk(self):
        return MyDistro.ActivateResourceDisk()

    def Process(self):
        """
        Execute ActivateResourceDisk in a separate thread.
        Create the user account.
        Launch ConfigurationConsumer if specified in the config.
        """
        no_thread = False
        if DiskActivated == False:
            for m in inspect.getmembers(MyDistro):
                if 'ActivateResourceDiskNoThread' in m:
                    no_thread = True
                    break
            if no_thread == True:
                MyDistro.ActivateResourceDiskNoThread()
            else:
                diskThread = threading.Thread(target = self.ActivateResourceDisk)
                diskThread.start()
        User = None
        Pass = None
        Expiration = None
        Thumbprint = None
        for b in self.ApplicationSettings:
            sname = b.getAttribute("name")
            svalue = b.getAttribute("value")
        if User != None and Pass != None:
            if User != "root" and User != "" and Pass != "":
                CreateAccount(User, Pass, Expiration, Thumbprint)
            else:
                Error("Not creating user account: " + User)
        for c in self.Certificates:
            csha1 = c.getAttribute("certificateId").split(':')[1].upper()
            if os.path.isfile(csha1 + ".prv"):
                Log("Private key with thumbprint: " + csha1 + " was retrieved.")
            if os.path.isfile(csha1 + ".crt"):
                Log("Public cert with thumbprint: " + csha1 + " was retrieved.")
        program = Config.get("Role.ConfigurationConsumer")
        if program != None:
            try:
                Children.append(subprocess.Popen([program, LibDir + "/HostingEnvironmentConfig.xml"]))
            except OSError, e:
                ErrorWithPrefix('HostingEnvironmentConfig.Process',
                                'Exception: ' + str(e) + ' occurred launching ' + program)

class GoalState(Util):
    """
    Primary container for all configuration except OvfXml.
    Encapsulates http communication with endpoint server.
    Initializes and populates:
        self.HostingEnvironmentConfig
        self.SharedConfig
        self.ExtensionsConfig
        self.Certificates
    """
    # (commented sample GoalState XML; markup lost in extraction)
    #   2010-12-15  1  Started  16001
    #   c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2
    #   MachineRole_IN_0  Started
    #   http://10.115.153.40:80/machine/c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2/MachineRole%5FIN%5F0?comp=config&type=hostingEnvironmentConfig&incarnation=1
    #   http://10.115.153.40:80/machine/c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2/MachineRole%5FIN%5F0?comp=config&type=sharedConfig&incarnation=1
    #   http://10.115.153.40:80/machine/c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2/MachineRole%5FIN%5F0?comp=certificates&incarnation=1
    #   http://100.67.238.230:80/machine/9c87aa94-3bda-45e3-b2b7-0eb0fca7baff/1552dd64dc254e6884f8d5b8b68aa18f.eg%2Dplug%2Dvm?comp=config&type=extensionsConfig&incarnation=2
    #   http://100.67.238.230:80/machine/9c87aa94-3bda-45e3-b2b7-0eb0fca7baff/1552dd64dc254e6884f8d5b8b68aa18f.eg%2Dplug%2Dvm?comp=config&type=fullConfig&incarnation=2
    #
    # There is only one Role for VM images.
    #
    # Of primary interest is:
    #   LBProbePorts -- an http server needs to run here
    #
    # We also note Container/ContainerID and RoleInstance/InstanceId to
    # form the health report.
    # And of course, Incarnation
    #
    def __init__(self, Agent):
        self.Agent = Agent
        self.Endpoint = Agent.Endpoint
        self.TransportCert = Agent.TransportCert
        self.reinitialize()

    def reinitialize(self):
        self.Incarnation = None                 # integer
        self.ExpectedState = None               # "Started"
        self.HostingEnvironmentConfigUrl = None
        self.HostingEnvironmentConfigXml = None
        self.HostingEnvironmentConfig = None
        self.SharedConfigUrl = None
        self.SharedConfigXml = None
        self.SharedConfig = None
        self.CertificatesUrl = None
        self.CertificatesXml = None
        self.Certificates = None
        self.ExtensionsConfigUrl = None
        self.ExtensionsConfigXml = None
        self.ExtensionsConfig = None
        self.RoleInstanceId = None
        self.ContainerId = None
        self.LoadBalancerProbePort = None       # integer, ?list of integers

    def Parse(self, xmlText):
        """
        Request configuration data from endpoint server.
        Parse and populate contained configuration objects.
        Calls Certificates().Parse()
        Calls SharedConfig().Parse
        Calls ExtensionsConfig().Parse
        Calls HostingEnvironmentConfig().Parse
        """
        self.reinitialize()
        LogIfVerbose(xmlText)
        node = xml.dom.minidom.parseString(xmlText).childNodes[0]
        if node.localName != "GoalState":
            Error("GoalState.Parse: root not GoalState")
            return None
        for a in node.childNodes:
            if a.nodeType == node.ELEMENT_NODE:
                if a.localName == "Incarnation":
                    self.Incarnation = GetNodeTextData(a)
                elif a.localName == "Machine":
                    for b in a.childNodes:
                        if b.nodeType == node.ELEMENT_NODE:
                            if b.localName == "ExpectedState":
                                self.ExpectedState = GetNodeTextData(b)
                                Log("ExpectedState: " + self.ExpectedState)
                            elif b.localName == "LBProbePorts":
                                for c in b.childNodes:
                                    if c.nodeType == node.ELEMENT_NODE and c.localName == "Port":
                                        self.LoadBalancerProbePort = int(GetNodeTextData(c))
                elif a.localName == "Container":
                    for b in a.childNodes:
                        if b.nodeType == node.ELEMENT_NODE:
                            if b.localName == "ContainerId":
                                self.ContainerId = GetNodeTextData(b)
                                Log("ContainerId: " + self.ContainerId)
                            elif b.localName == "RoleInstanceList":
                                for c in b.childNodes:
                                    if c.localName == "RoleInstance":
                                        for d in c.childNodes:
                                            if d.nodeType == node.ELEMENT_NODE:
                                                if d.localName == "InstanceId":
                                                    self.RoleInstanceId = GetNodeTextData(d)
                                                    Log("RoleInstanceId: " + self.RoleInstanceId)
                                                elif d.localName == "State":
                                                    pass
                                                elif d.localName == "Configuration":
                                                    for e in d.childNodes:
                                                        if e.nodeType == node.ELEMENT_NODE:
                                                            LogIfVerbose(e.localName)
                                                            if e.localName == "HostingEnvironmentConfig":
                                                                self.HostingEnvironmentConfigUrl = GetNodeTextData(e)
                                                                LogIfVerbose("HostingEnvironmentConfigUrl:" + self.HostingEnvironmentConfigUrl)
                                                                self.HostingEnvironmentConfigXml = self.HttpGetWithHeaders(self.HostingEnvironmentConfigUrl)
                                                                self.HostingEnvironmentConfig = HostingEnvironmentConfig().Parse(self.HostingEnvironmentConfigXml)
                                                            elif e.localName == "SharedConfig":
                                                                self.SharedConfigUrl = GetNodeTextData(e)
                                                                LogIfVerbose("SharedConfigUrl:" + self.SharedConfigUrl)
                                                                self.SharedConfigXml = self.HttpGetWithHeaders(self.SharedConfigUrl)
                                                                self.SharedConfig = SharedConfig().Parse(self.SharedConfigXml)
                                                                self.SharedConfig.Save()
                                                            elif e.localName == "ExtensionsConfig":
                                                                self.ExtensionsConfigUrl = GetNodeTextData(e)
                                                                LogIfVerbose("ExtensionsConfigUrl:" + self.ExtensionsConfigUrl)
                                                                self.ExtensionsConfigXml = self.HttpGetWithHeaders(self.ExtensionsConfigUrl)
                                                            elif e.localName == "Certificates":
                                                                self.CertificatesUrl = GetNodeTextData(e)
                                                                LogIfVerbose("CertificatesUrl:" + self.CertificatesUrl)
                                                                self.CertificatesXml = self.HttpSecureGetWithHeaders(self.CertificatesUrl, self.TransportCert)
                                                                self.Certificates = Certificates().Parse(self.CertificatesXml)
        if self.Incarnation == None:
            Error("GoalState.Parse: Incarnation missing")
            return None
        if self.ExpectedState == None:
            Error("GoalState.Parse: ExpectedState missing")
            return None
        if self.RoleInstanceId == None:
            Error("GoalState.Parse: RoleInstanceId missing")
            return None
        if self.ContainerId == None:
            Error("GoalState.Parse: ContainerId missing")
            return None
        SetFileContents("GoalState." + self.Incarnation + ".xml", xmlText)
        return self

    def Process(self):
        """
        Calls HostingEnvironmentConfig.Process()
        """
        LogIfVerbose("Process goalstate")
        self.HostingEnvironmentConfig.Process()
        self.SharedConfig.Process()

class OvfEnv(object):
    """
    Read and process provisioning info from provisioning file OvfEnv.xml.
    """
    # (commented sample OvfEnv XML; markup lost in extraction)
    #   1.0  LinuxProvisioningConfiguration
    #   HostName  UserName  UserPassword  false
    #   EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62  $HOME/UserName/.ssh/authorized_keys
    #   EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62  $HOME/UserName/.ssh/id_rsa
    #
    def __init__(self):
        self.reinitialize()

    def reinitialize(self):
        """
        Reset members.
        """
        self.WaNs = "http://schemas.microsoft.com/windowsazure"
        self.OvfNs = "http://schemas.dmtf.org/ovf/environment/1"
        self.MajorVersion = 1
        self.MinorVersion = 0
        self.ComputerName = None
        self.AdminPassword = None
        self.UserName = None
        self.UserPassword = None
        self.CustomData = None
        self.DisableSshPasswordAuthentication = True
        self.SshPublicKeys = []
        self.SshKeyPairs = []

    def Parse(self, xmlText, isDeprovision = False):
        """
        Parse xml tree, retrieving user and ssh key information.
        Return self.
        """
        self.reinitialize()
        LogIfVerbose(re.sub(".*?<", "*<", xmlText))
        dom = xml.dom.minidom.parseString(xmlText)
        if len(dom.getElementsByTagNameNS(self.OvfNs, "Environment")) != 1:
            Error("Unable to parse OVF XML.")
        section = None
        newer = False
        for p in dom.getElementsByTagNameNS(self.WaNs, "ProvisioningSection"):
            for n in p.childNodes:
                if n.localName == "Version":
                    verparts = GetNodeTextData(n).split('.')
                    major = int(verparts[0])
                    minor = int(verparts[1])
                    if major > self.MajorVersion:
                        newer = True
                    if major != self.MajorVersion:
                        break
                    if minor > self.MinorVersion:
                        newer = True
                    section = p
        if newer == True:
            Warn("Newer provisioning configuration detected. Please consider updating waagent.")
        if section == None:
            Error("Could not find ProvisioningSection with major version=" + str(self.MajorVersion))
            return None
        self.ComputerName = GetNodeTextData(section.getElementsByTagNameNS(self.WaNs, "HostName")[0])
        self.UserName = GetNodeTextData(section.getElementsByTagNameNS(self.WaNs, "UserName")[0])
        if isDeprovision == True:
            return self
        try:
            self.UserPassword = GetNodeTextData(section.getElementsByTagNameNS(self.WaNs, "UserPassword")[0])
        except:
            pass
        CDSection = None
        try:
            CDSection = section.getElementsByTagNameNS(self.WaNs, "CustomData")
            if len(CDSection) > 0:
                self.CustomData = GetNodeTextData(CDSection[0])
                if len(self.CustomData) > 0:
                    SetFileContents(LibDir + '/CustomData',
                                    bytearray(MyDistro.translateCustomData(self.CustomData), 'utf-8'))
                    Log('Wrote ' + LibDir + '/CustomData')
                else:
                    Error(' contains no data!')
        except Exception, e:
            Error(str(e) + ' occurred creating ' + LibDir + '/CustomData')
        disableSshPass = section.getElementsByTagNameNS(self.WaNs, "DisableSshPasswordAuthentication")
        if len(disableSshPass) != 0:
            self.DisableSshPasswordAuthentication = (GetNodeTextData(disableSshPass[0]).lower() == "true")
        for pkey in section.getElementsByTagNameNS(self.WaNs, "PublicKey"):
            LogIfVerbose(repr(pkey))
            fp = None
            path = None
            for c in pkey.childNodes:
                if c.localName == "Fingerprint":
                    fp = GetNodeTextData(c).upper()
                    LogIfVerbose(fp)
                if c.localName == "Path":
                    path = GetNodeTextData(c)
                    LogIfVerbose(path)
            self.SshPublicKeys += [[fp, path]]
        for keyp in section.getElementsByTagNameNS(self.WaNs, "KeyPair"):
            fp = None
            path = None
            LogIfVerbose(repr(keyp))
            for c in keyp.childNodes:
                if c.localName == "Fingerprint":
                    fp = GetNodeTextData(c).upper()
                    LogIfVerbose(fp)
                if c.localName == "Path":
                    path = GetNodeTextData(c)
                    LogIfVerbose(path)
            self.SshKeyPairs += [[fp, path]]
        return self

    def PrepareDir(self, filepath):
        """
        Create home dir for self.UserName.
        Change owner and return path.
        """
        home = MyDistro.GetHome()
        # Expand HOME variable if present in path
        path = os.path.normpath(filepath.replace("$HOME", home))
        if (path.startswith("/") == False) or (path.endswith("/") == True):
            return None
        dir = path.rsplit('/', 1)[0]
        if dir != "":
            CreateDir(dir, "root", 0700)
            if path.startswith(os.path.normpath(home + "/" + self.UserName + "/")):
                ChangeOwner(dir, self.UserName)
        return path

    def NumberToBytes(self, i):
        """
        Pack number into bytes. Return as string.
        """
        result = []
        while i:
            result.append(chr(i & 0xFF))
            i >>= 8
        result.reverse()
        return ''.join(result)

    def BitsToString(self, a):
        """
        Return string representation of bits in a.
        """
        index = 7
        s = ""
        c = 0
        for bit in a:
            c = c | (bit << index)
            index = index - 1
            if index == -1:
                s = s + struct.pack('>B', c)
                c = 0
                index = 7
        return s

    def OpensslToSsh(self, file):
        """
        Return base-64 encoded key appropriate for ssh.
        """
        from pyasn1.codec.der import decoder as der_decoder
        try:
            f = open(file).read().replace('\n', '').split("KEY-----")[1].split('-')[0]
            k = der_decoder.decode(self.BitsToString(der_decoder.decode(base64.b64decode(f))[0][1]))[0]
            n = k[0]
            e = k[1]
            keydata = ""
            keydata += struct.pack('>I', len("ssh-rsa"))
            keydata += "ssh-rsa"
            keydata += struct.pack('>I', len(self.NumberToBytes(e)))
            keydata += self.NumberToBytes(e)
            keydata += struct.pack('>I', len(self.NumberToBytes(n)) + 1)
            keydata += "\0"
            keydata += self.NumberToBytes(n)
        except Exception, e:
            print("OpensslToSsh: Exception " + str(e))
            return None
        return "ssh-rsa " + base64.b64encode(keydata) + "\n"

    def Process(self):
        """
        Process all certificate and key info.
        DisableSshPasswordAuthentication if configured.
        CreateAccount(user)
        Wait for WaAgent.EnvMonitor.IsHostnamePublished().
        Restart ssh service.
        """
        error = None
        if self.ComputerName == None:
            return "Error: Hostname missing"
        error = WaAgent.EnvMonitor.SetHostName(self.ComputerName)
        if error:
            return error
        if self.DisableSshPasswordAuthentication:
            filepath = "/etc/ssh/sshd_config"
            # Disable RFC 4252 and RFC 4256 authentication schemes.
            ReplaceFileContentsAtomic(filepath,
                                      "\n".join(filter(lambda a: not (a.startswith("PasswordAuthentication")
                                                                      or a.startswith("ChallengeResponseAuthentication")),
                                                       GetFileContents(filepath).split('\n')))
                                      + "\nPasswordAuthentication no\nChallengeResponseAuthentication no\n")
            Log("Disabled SSH password-based authentication methods.")
        if self.AdminPassword != None:
            MyDistro.changePass('root', self.AdminPassword)
        if self.UserName != None:
            error = MyDistro.CreateAccount(self.UserName, self.UserPassword, None, None)
        sel = MyDistro.isSelinuxRunning()
        if sel:
            MyDistro.setSelinuxEnforce(0)
        home = MyDistro.GetHome()
        for pkey in self.SshPublicKeys:
            Log("Deploy public key:{0}".format(pkey[0]))
            if not os.path.isfile(pkey[0] + ".crt"):
                Error("PublicKey not found: " + pkey[0])
                error = "Failed to deploy public key (0x09)."
                continue
            path = self.PrepareDir(pkey[1])
            if path == None:
                Error("Invalid path: " + pkey[1] + " for PublicKey: " + pkey[0])
                error = "Invalid path for public key (0x03)."
                continue
            Run(Openssl + " x509 -in " + pkey[0] + ".crt -noout -pubkey > " + pkey[0] + ".pub")
            MyDistro.setSelinuxContext(pkey[0] + '.pub', 'unconfined_u:object_r:ssh_home_t:s0')
            MyDistro.sshDeployPublicKey(pkey[0] + '.pub', path)
            MyDistro.setSelinuxContext(path, 'unconfined_u:object_r:ssh_home_t:s0')
            if path.startswith(os.path.normpath(home + "/" + self.UserName + "/")):
                ChangeOwner(path, self.UserName)
        for keyp in self.SshKeyPairs:
            Log("Deploy key pair:{0}".format(keyp[0]))
            if not os.path.isfile(keyp[0] + ".prv"):
                Error("KeyPair not found: " + keyp[0])
                error = "Failed to deploy key pair (0x0A)."
                continue
            path = self.PrepareDir(keyp[1])
            if path == None:
                Error("Invalid path: " + keyp[1] + " for KeyPair: " + keyp[0])
                error = "Invalid path for key pair (0x05)."
                continue
            SetFileContents(path, GetFileContents(keyp[0] + ".prv"))
            os.chmod(path, 0600)
            Run("ssh-keygen -y -f " + keyp[0] + ".prv > " + path + ".pub")
            MyDistro.setSelinuxContext(path, 'unconfined_u:object_r:ssh_home_t:s0')
            MyDistro.setSelinuxContext(path + '.pub', 'unconfined_u:object_r:ssh_home_t:s0')
            if path.startswith(os.path.normpath(home + "/" + self.UserName + "/")):
                ChangeOwner(path, self.UserName)
                ChangeOwner(path + ".pub", self.UserName)
        if sel:
            MyDistro.setSelinuxEnforce(1)
        while not WaAgent.EnvMonitor.IsHostnamePublished():
            time.sleep(1)
        MyDistro.restartSshService()
        return error

class WALAEvent(object):
    def __init__(self):
        self.providerId = ""
        self.eventId = 1
        self.OpcodeName = ""
        self.KeywordName = ""
        self.TaskName = ""
        self.TenantName = ""
        self.RoleName = ""
        self.RoleInstanceName = ""
        self.ContainerId = ""
        self.ExecutionMode = "IAAS"
        self.OSVersion = ""
        self.GAVersion = ""
        self.RAM = 0
        self.Processors = 0

    def ToXml(self):
        strEventid = u''.format(self.eventId)
        strProviderid = u''.format(self.providerId)
        strRecordFormat = u''
        strRecordNoQuoteFormat = u''
        strMtStr = u'mt:wstr'
        strMtUInt64 = u'mt:uint64'
        strMtBool = u'mt:bool'
        strMtFloat = u'mt:float64'
        strEventsData = u""
        for attName in self.__dict__:
            if attName in ["eventId", "filedCount", "providerId"]:
                continue
            attValue = self.__dict__[attName]
            if type(attValue) is int:
                strEventsData += strRecordFormat.format(attName, attValue, strMtUInt64)
                continue
            if type(attValue) is str:
                attValue = xml.sax.saxutils.quoteattr(attValue)
                strEventsData += strRecordNoQuoteFormat.format(attName, attValue, strMtStr)
                continue
            if str(type(attValue)).count("'unicode'") > 0:
                attValue = xml.sax.saxutils.quoteattr(attValue)
                strEventsData += strRecordNoQuoteFormat.format(attName, attValue, strMtStr)
                continue
            if type(attValue) is bool:
                strEventsData += strRecordFormat.format(attName, attValue, strMtBool)
                continue
            if type(attValue) is float:
                strEventsData += strRecordFormat.format(attName, attValue, strMtFloat)
                continue
            Log("Warning: property " + attName + ":" + str(type(attValue))
                + ": can't convert to events data: type not supported")
        return u"{0}{1}{2}".format(strProviderid, strEventid, strEventsData)

    def Save(self):
        eventfolder = LibDir + "/events"
        if not os.path.exists(eventfolder):
            os.mkdir(eventfolder)
            os.chmod(eventfolder, 0700)
        if len(os.listdir(eventfolder)) > 1000:
            raise Exception("WriteToFolder: too many files under " + eventfolder + ", exit")
        filename = os.path.join(eventfolder, str(int(time.time() * 1000000)))
        with open(filename + ".tmp", 'wb+') as hfile:
            hfile.write(self.ToXml().encode("utf-8"))
        os.rename(filename + ".tmp", filename + ".tld")

class WALAEventOperation:
    HeartBeat = "HeartBeat"
    Provision = "Provision"
    Install = "Install"
    UnIsntall = "UnInstall"
    Disable = "Disable"
    Enable = "Enable"
    Download = "Download"
    Upgrade = "Upgrade"
    Update = "Update"

def AddExtensionEvent(name, op, isSuccess, duration=0, version="1.0",
                      message="", type="", isInternal=False):
    event = ExtensionEvent()
    event.Name = name
    event.Version = version
    event.IsInternal = isInternal
    event.Operation = op
    event.OperationSuccess = isSuccess
    event.Message = message
    event.Duration = duration
    event.ExtensionType = type
    try:
        event.Save()
    except:
        Error("Error " + traceback.format_exc())

class ExtensionEvent(WALAEvent):
    def __init__(self):
        WALAEvent.__init__(self)
        self.eventId = 1
        self.providerId = "69B669B9-4AF8-4C50-BDC4-6006FA76E975"
        self.Name = ""
        self.Version = ""
        self.IsInternal = False
        self.Operation = ""
        self.OperationSuccess = True
        self.ExtensionType = ""
        self.Message = ""
        self.Duration = 0

class WALAEventMonitor(WALAEvent):
    def __init__(self, postMethod):
        WALAEvent.__init__(self)
        self.post = postMethod
        self.sysInfo = {}
        self.eventdir = LibDir + "/events"
        self.issysteminfoinitilized = False

    def StartEventsLoop(self):
        eventThread = threading.Thread(target = self.EventsLoop)
        eventThread.setDaemon(True)
        eventThread.start()

    def EventsLoop(self):
        LastReportHeartBeatTime = datetime.datetime.min
        try:
            while True:
                if (datetime.datetime.now() - LastReportHeartBeatTime) > \
                        datetime.timedelta(minutes=30):
                    LastReportHeartBeatTime = datetime.datetime.now()
                    AddExtensionEvent(op=WALAEventOperation.HeartBeat, name="WALA", isSuccess=True)
                self.postNumbersInOneLoop = 0
                self.CollectAndSendWALAEvents()
                time.sleep(60)
        except:
            Error("Exception in events loop:" + traceback.format_exc())

    def SendEvent(self, providerid, events):
        dataFormat = u'{1}'\
                     ''
        data = dataFormat.format(providerid, events)
        self.post("/machine/?comp=telemetrydata", data)

    def CollectAndSendWALAEvents(self):
        if not os.path.exists(self.eventdir):
            return
        # Throttling: can't send more than 3 events in 15 seconds
        eventSendNumber = 0
        eventFiles = os.listdir(self.eventdir)
        events = {}
        for file in eventFiles:
            if not file.endswith(".tld"):
                continue
            with open(os.path.join(self.eventdir, file), "rb") as hfile:
                # if we fail to open or delete the file, throw an exception
                xmlStr = hfile.read().decode("utf-8", 'ignore')
            os.remove(os.path.join(self.eventdir, file))
            params = ""
            eventid = ""
            providerid = ""
            # if an exception happens while processing an event, catch it and continue
            try:
                xmlStr = self.AddSystemInfo(xmlStr)
                for node in xml.dom.minidom.parseString(xmlStr.encode("utf-8")).childNodes[0].childNodes:
                    if node.tagName == "Param":
                        params += node.toxml()
                    if node.tagName == "Event":
                        eventid = node.getAttribute("id")
                    if node.tagName == "Provider":
                        providerid = node.getAttribute("id")
            except:
                Error(traceback.format_exc())
                continue
            if len(params) == 0 or len(eventid) == 0 or len(providerid) == 0:
                Error("Empty field in params:" + params
                      + " event id:" + eventid + " provider id:" + providerid)
                continue
            eventstr = u''.format(eventid, params)
            if not events.get(providerid):
                events[providerid] = ""
            if len(events[providerid]) > 0 and len(events.get(providerid) + eventstr) >= 63 * 1024:
                eventSendNumber += 1
                self.SendEvent(providerid, events.get(providerid))
                if eventSendNumber % 3 == 0:
                    time.sleep(15)
                events[providerid] = ""
            if len(eventstr) >= 63 * 1024:
                Error("Single event too large, abort " + eventstr[:300])
                continue
            events[providerid] = events.get(providerid) + eventstr
        for key in events.keys():
            if len(events[key]) > 0:
                eventSendNumber += 1
                self.SendEvent(key, events[key])
                if eventSendNumber % 3 == 0:
                    time.sleep(15)

    def AddSystemInfo(self, eventData):
        if not self.issysteminfoinitilized:
            self.issysteminfoinitilized = True
            try:
                self.sysInfo["OSVersion"] = platform.system() + ":" + "-".join(DistInfo(1)) + ":" + platform.release()
                self.sysInfo["GAVersion"] = GuestAgentVersion
                self.sysInfo["RAM"] = MyDistro.getTotalMemory()
                self.sysInfo["Processors"] = MyDistro.getProcessorCores()
                sharedConfig = xml.dom.minidom.parse("/var/lib/waagent/SharedConfig.xml").childNodes[0]
                hostEnvConfig = xml.dom.minidom.parse("/var/lib/waagent/HostingEnvironmentConfig.xml").childNodes[0]
                gfiles = RunGetOutput("ls -t /var/lib/waagent/GoalState.*.xml")[1]
                goalStateConfi = xml.dom.minidom.parse(gfiles.split("\n")[0]).childNodes[0]
                self.sysInfo["TenantName"] = hostEnvConfig.getElementsByTagName("Deployment")[0].getAttribute("name")
                self.sysInfo["RoleName"] = hostEnvConfig.getElementsByTagName("Role")[0].getAttribute("name")
                self.sysInfo["RoleInstanceName"] = sharedConfig.getElementsByTagName("Instance")[0].getAttribute("id")
                self.sysInfo["ContainerId"] = goalStateConfi.getElementsByTagName("ContainerId")[0].childNodes[0].nodeValue
            except:
                Error(traceback.format_exc())
        eventObject = xml.dom.minidom.parseString(eventData.encode("utf-8")).childNodes[0]
        for node in eventObject.childNodes:
            if node.tagName == "Param":
                name = node.getAttribute("Name")
                if self.sysInfo.get(name):
                    node.setAttribute("Value", xml.sax.saxutils.escape(str(self.sysInfo[name])))
        return eventObject.toxml()

class Agent(Util):
    """
    Primary object container for the provisioning process.
    """
    def __init__(self):
        self.GoalState = None
        self.Endpoint = None
        self.LoadBalancerProbeServer = None
        self.HealthReportCounter = 0
        self.TransportCert = ""
        self.EnvMonitor = None
        self.SendData = None
        self.DhcpResponse = None

    def CheckVersions(self):
        """
        Query endpoint server for wire protocol version.
        Fail if our desired protocol version is not seen.
        """
        # (commented sample Versions XML; markup lost in extraction)
        #   2010-12-15  2010-12-15  2010-28-10
        global ProtocolVersion
        protocolVersionSeen = False
        node = xml.dom.minidom.parseString(self.HttpGetWithoutHeaders("/?comp=versions")).childNodes[0]
        if node.localName != "Versions":
            Error("CheckVersions: root not Versions")
            return False
        for a in node.childNodes:
            if a.nodeType == node.ELEMENT_NODE and a.localName == "Supported":
                for b in a.childNodes:
                    if b.nodeType == node.ELEMENT_NODE and b.localName == "Version":
                        v = GetNodeTextData(b)
                        LogIfVerbose("Fabric supported wire protocol version: " + v)
                        if v == ProtocolVersion:
                            protocolVersionSeen = True
            if a.nodeType == node.ELEMENT_NODE and a.localName == "Preferred":
                v = GetNodeTextData(a.getElementsByTagName("Version")[0])
                Log("Fabric preferred wire protocol version: " + v)
        if not protocolVersionSeen:
            Warn("Agent supported wire protocol version: " + ProtocolVersion + " was not advertised by Fabric.")
        else:
            Log("Negotiated wire protocol version: " + ProtocolVersion)
        return True

    def Unpack(self, buffer, offset, range):
        """
        Unpack bytes into python values.
        """
        result = 0
        for i in range:
            result = (result << 8) | Ord(buffer[offset + i])
        return result

    def UnpackLittleEndian(self, buffer, offset, length):
        """
        Unpack little endian bytes into python values.
        """
        return self.Unpack(buffer, offset, list(range(length - 1, -1, -1)))

    def UnpackBigEndian(self, buffer, offset, length):
        """
        Unpack big endian bytes into python values.
        """
        return self.Unpack(buffer, offset, list(range(0, length)))

    def HexDump3(self, buffer, offset, length):
        """
        Dump range of buffer in formatted hex.
        """
        return ''.join(['%02X' % Ord(char) for char in buffer[offset:offset + length]])

    def HexDump2(self, buffer):
        """
        Dump buffer in formatted hex.
        """
        return self.HexDump3(buffer, 0, len(buffer))

    def BuildDhcpRequest(self):
        """
        Build DHCP request string.
        """
        #
        # typedef struct _DHCP {
        #     UINT8 Opcode;                     /* op:    BOOTREQUEST or BOOTREPLY */
        #     UINT8 HardwareAddressType;        /* htype: ethernet */
        #     UINT8 HardwareAddressLength;      /* hlen:  6 (48 bit mac address) */
        #     UINT8 Hops;                       /* hops:  0 */
        #     UINT8 TransactionID[4];           /* xid:   random */
        #     UINT8 Seconds[2];                 /* secs:  0 */
        #     UINT8 Flags[2];                   /* flags: 0 or 0x8000 for broadcast */
        #     UINT8 ClientIpAddress[4];         /* ciaddr: 0 */
        #     UINT8 YourIpAddress[4];           /* yiaddr: 0 */
        #     UINT8 ServerIpAddress[4];         /* siaddr: 0 */
        #     UINT8 RelayAgentIpAddress[4];     /* giaddr: 0 */
        #     UINT8 ClientHardwareAddress[16];  /* chaddr: 6 byte ethernet MAC address */
        #     UINT8 ServerName[64];             /* sname: 0 */
        #     UINT8 BootFileName[128];          /* file:  0 */
        #     UINT8 MagicCookie[4];             /* 99 130 83 99 */
        #                                       /* 0x63 0x82 0x53 0x63 */
        #     /* options -- hard code ours */
        #     UINT8 MessageTypeCode;            /* 53 */
        #     UINT8 MessageTypeLength;          /* 1 */
        #     UINT8 MessageType;                /* 1 for DISCOVER */
        #     UINT8 End;                        /* 255 */
        # } DHCP;
        #
        # tuple of 244 zeros
        # (struct.pack_into would be good here, but requires Python 2.5)
        sendData = [0] * 244
        transactionID = os.urandom(4)
        macAddress = MyDistro.GetMacAddress()
        # Opcode = 1
        # HardwareAddressType = 1 (ethernet/MAC)
        # HardwareAddressLength = 6 (ethernet/MAC/48 bits)
        for a in range(0, 3):
            sendData[a] = [1, 1, 6][a]
        # fill in transaction id (random number to ensure response matches request)
        for a in range(0, 4):
            sendData[4 + a] = Ord(transactionID[a])
        LogIfVerbose("BuildDhcpRequest: transactionId:%s,%04X" % (self.HexDump2(transactionID), self.UnpackBigEndian(sendData, 4, 4)))
        # fill in ClientHardwareAddress
        for a in range(0, 6):
            sendData[0x1C + a] = Ord(macAddress[a])
        # DHCP Magic Cookie: 99, 130, 83, 99
        # MessageTypeCode = 53 DHCP Message Type
        # MessageTypeLength = 1
        # MessageType = DHCPDISCOVER
        # End = 255 DHCP_END
        for a in range(0, 8):
            sendData[0xEC + a] = [99, 130, 83, 99, 53, 1, 1, 255][a]
        return array.array("B", sendData)

    def IntegerToIpAddressV4String(self, a):
        """
        Return 32-bit integer a as a dotted-quad IPv4 address string.
        """
        return "%u.%u.%u.%u" % ((a >> 24) & 0xFF, (a >> 16) & 0xFF, (a >> 8) & 0xFF, a & 0xFF)

    def RouteAdd(self, net, mask, gateway):
        """
        Add specified route using /sbin/route add -net.
        """
        net = self.IntegerToIpAddressV4String(net)
        mask = self.IntegerToIpAddressV4String(mask)
        gateway = self.IntegerToIpAddressV4String(gateway)
        Log("Route add: net={0}, mask={1}, gateway={2}".format(net, mask, gateway))
        MyDistro.routeAdd(net, mask, gateway)

    def SetDefaultGateway(self, gateway):
        """
        Set default gateway.
        """
        gateway = self.IntegerToIpAddressV4String(gateway)
        Log("Set default gateway: {0}".format(gateway))
        MyDistro.setDefaultGateway(gateway)

    def HandleDhcpResponse(self, sendData, receiveBuffer):
        """
        Parse DHCP response:
        Set default gateway.
        Set default routes.
        Retrieve endpoint server.
        Returns endpoint server or None on error.
        """
        LogIfVerbose("HandleDhcpResponse")
        bytesReceived = len(receiveBuffer)
        if bytesReceived < 0xF6:
            Error("HandleDhcpResponse: Too few bytes received " + str(bytesReceived))
            return None
        LogIfVerbose("BytesReceived: " + hex(bytesReceived))
        LogWithPrefixIfVerbose("DHCP response:", HexDump(receiveBuffer, bytesReceived))
        # check transactionId, cookie, MAC address
        # cookie should never mismatch
        # transactionId and MAC address may mismatch if we see a response meant for another machine
        for offsets in [list(range(4, 4 + 4)), list(range(0x1C, 0x1C + 6)), list(range(0xEC, 0xEC + 4))]:
            for offset in offsets:
                sentByte = Ord(sendData[offset])
                receivedByte = Ord(receiveBuffer[offset])
                if sentByte != receivedByte:
                    LogIfVerbose("HandleDhcpResponse: sent cookie:" + self.HexDump3(sendData, 0xEC, 4))
                    LogIfVerbose("HandleDhcpResponse: rcvd cookie:" + self.HexDump3(receiveBuffer, 0xEC, 4))
                    LogIfVerbose("HandleDhcpResponse: sent transactionID:" + self.HexDump3(sendData, 4, 4))
                    LogIfVerbose("HandleDhcpResponse: rcvd transactionID:" + self.HexDump3(receiveBuffer, 4, 4))
                    LogIfVerbose("HandleDhcpResponse: sent ClientHardwareAddress:" + self.HexDump3(sendData, 0x1C, 6))
                    LogIfVerbose("HandleDhcpResponse: rcvd ClientHardwareAddress:" + self.HexDump3(receiveBuffer, 0x1C, 6))
                    LogIfVerbose("HandleDhcpResponse: transactionId, cookie, or MAC address mismatch")
                    return None
        endpoint = None
        #
        # Walk all the returned options, parsing out what we need, ignoring the others.
        # We need the custom option 245 to find the endpoint we talk to,
        # as well as options 3 (default gateway) and 249 (routes), to handle
        # some Linux DHCP client incompatibilities. Option 255 is end.
        #
        i = 0xF0  # offset to first option
        while i < bytesReceived:
            option = Ord(receiveBuffer[i])
            length = 0
            if (i + 1) < bytesReceived:
                length = Ord(receiveBuffer[i + 1])
            LogIfVerbose("DHCP option " + hex(option) + " at offset:" + hex(i) + " with length:" + hex(length))
            if option == 255:
                LogIfVerbose("DHCP packet ended at offset " + hex(i))
                break
            elif option == 249:
                # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx
                LogIfVerbose("Routes at offset:" + hex(i) + " with length:" + hex(length))
                if length < 5:
                    Error("Data too small for option " + str(option))
                j = i + 2
                while j < (i + length + 2):
                    maskLengthBits = Ord(receiveBuffer[j])
                    maskLengthBytes = (((maskLengthBits + 7) & ~7) >> 3)
                    mask = 0xFFFFFFFF & (0xFFFFFFFF << (32 - maskLengthBits))
                    j += 1
                    net = self.UnpackBigEndian(receiveBuffer, j, maskLengthBytes)
                    net <<= (32 - maskLengthBytes * 8)
                    net &= mask
                    j += maskLengthBytes
                    gateway = self.UnpackBigEndian(receiveBuffer, j, 4)
                    j += 4
                    self.RouteAdd(net, mask, gateway)
                if j != (i + length + 2):
                    Error("HandleDhcpResponse: Unable to parse routes")
            elif option == 3 or option == 245:
                if i + 5 < bytesReceived:
                    if length != 4:
                        Error("HandleDhcpResponse: Endpoint or Default Gateway not 4 bytes")
                        return None
                    gateway = self.UnpackBigEndian(receiveBuffer, i + 2, 4)
                    IpAddress = self.IntegerToIpAddressV4String(gateway)
                    if option == 3:
                        self.SetDefaultGateway(gateway)
                        name = "DefaultGateway"
                    else:
                        endpoint = IpAddress
                        name = "Azure wire protocol endpoint"
                    LogIfVerbose(name + ": " + IpAddress + " at " + hex(i))
                else:
                    Error("HandleDhcpResponse: Data too small for option " + str(option))
            else:
                LogIfVerbose("Skipping DHCP option " + hex(option) + " at " + hex(i) + " with length " + hex(length))
            i += length + 2
        return endpoint

    def DoDhcpWork(self):
        """
        Discover the wire server via DHCP option 245,
        and work around incompatibilities with Azure DHCP servers.
        """
        ShortSleep = False  # Sleep 1 second before retrying DHCP queries.
        ifname = None
        sleepDurations = [0, 10, 30, 60, 60]
        maxRetry = len(sleepDurations)
        lastTry = (maxRetry - 1)
        for retry in range(0, maxRetry):
            try:
                # Open DHCP port if iptables is enabled.
                Run("iptables -D INPUT -p udp --dport 68 -j ACCEPT", chk_err=False)  # We suppress error logging on error.
                Run("iptables -I INPUT -p udp --dport 68 -j ACCEPT", chk_err=False)  # We suppress error logging on error.
                strRetry = str(retry)
                prefix = "DoDhcpWork: try=" + strRetry
                LogIfVerbose(prefix)
                sendData = self.BuildDhcpRequest()
                LogWithPrefixIfVerbose("DHCP request:", HexDump(sendData, len(sendData)))
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                missingDefaultRoute = True
                try:
                    if DistInfo()[0] == 'FreeBSD':
                        missingDefaultRoute = True
                    else:
                        routes = RunGetOutput("route -n")[1]
                        for line in routes.split('\n'):
                            if line.startswith("0.0.0.0 ") or line.startswith("default "):
                                missingDefaultRoute = False
                except:
                    pass
                if missingDefaultRoute:
                    # This is required because sending after binding to 0.0.0.0 fails with
                    # network unreachable when the default gateway is not set up.
ifname=MyDistro.GetInterfaceName() Log("DoDhcpWork: Missing default route - adding broadcast route for DHCP.") if DistInfo()[0] == 'FreeBSD': Run("route add -net 255.255.255.255 -iface " + ifname,chk_err=False) else: Run("route add 255.255.255.255 dev " + ifname,chk_err=False) if MyDistro.isDHCPEnabled(): MyDistro.stopDHCP() sock.bind(("0.0.0.0", 68)) sock.sendto(sendData, ("", 67)) sock.settimeout(10) Log("DoDhcpWork: Setting socket.timeout=10, entering recv") receiveBuffer = sock.recv(1024) endpoint = self.HandleDhcpResponse(sendData, receiveBuffer) if endpoint == None: LogIfVerbose("DoDhcpWork: No endpoint found") if endpoint != None or retry == lastTry: if endpoint != None: self.SendData = sendData self.DhcpResponse = receiveBuffer if retry == lastTry: LogIfVerbose("DoDhcpWork: try=" + strRetry) return endpoint sleepDuration = [sleepDurations[retry % len(sleepDurations)], 1][ShortSleep] LogIfVerbose("DoDhcpWork: sleep=" + str(sleepDuration)) time.sleep(sleepDuration) except Exception, e: ErrorWithPrefix(prefix, str(e)) ErrorWithPrefix(prefix, traceback.format_exc()) finally: sock.close() if missingDefaultRoute: #We added this route - delete it Log("DoDhcpWork: Removing broadcast route for DHCP.") if DistInfo()[0] == 'FreeBSD': Run("route del -net 255.255.255.255 -iface " + ifname,chk_err=False) else: Run("route del 255.255.255.255 dev " + ifname,chk_err=False) # We supress error logging on error. if MyDistro.isDHCPEnabled(): MyDistro.startDHCP() return None def UpdateAndPublishHostName(self, name): """ Set hostname locally and publish to iDNS """ Log("Setting host name: " + name) MyDistro.publishHostname(name) ethernetInterface = MyDistro.GetInterfaceName() MyDistro.RestartInterface(ethernetInterface) self.RestoreRoutes() def RestoreRoutes(self): """ If there is a DHCP response, then call HandleDhcpResponse. 
""" if self.SendData != None and self.DhcpResponse != None: self.HandleDhcpResponse(self.SendData, self.DhcpResponse) def UpdateGoalState(self): """ Retreive goal state information from endpoint server. Parse xml and initialize Agent.GoalState object. Return object or None on error. """ goalStateXml = None maxRetry = 9 log = NoLog for retry in range(1, maxRetry + 1): strRetry = str(retry) log("retry UpdateGoalState,retry=" + strRetry) goalStateXml = self.HttpGetWithHeaders("/machine/?comp=goalstate") if goalStateXml != None: break log = Log time.sleep(retry) if not goalStateXml: Error("UpdateGoalState failed.") return Log("Retrieved GoalState from Azure Fabric.") self.GoalState = GoalState(self).Parse(goalStateXml) return self.GoalState def ReportReady(self): """ Send health report 'Ready' to server. This signals the fabric that our provosion is completed, and the host is ready for operation. """ counter = (self.HealthReportCounter + 1) % 1000000 self.HealthReportCounter = counter healthReport = ("" + self.GoalState.Incarnation + "" + self.GoalState.ContainerId + "" + self.GoalState.RoleInstanceId + "Ready") a = self.HttpPostWithHeaders("/machine?comp=health", healthReport) if a != None: return a.getheader("x-ms-latest-goal-state-incarnation-number") return None def ReportNotReady(self, status, desc): """ Send health report 'Provisioning' to server. This signals the fabric that our provosion is starting. """ healthReport = ("" + self.GoalState.Incarnation + "" + self.GoalState.ContainerId + "" + self.GoalState.RoleInstanceId + "NotReady" + "
" + status + "" + desc + "
" + "
") a = self.HttpPostWithHeaders("/machine?comp=health", healthReport) if a != None: return a.getheader("x-ms-latest-goal-state-incarnation-number") return None def ReportRoleProperties(self, thumbprint): """ Send roleProperties and thumbprint to server. """ roleProperties = ("" + "" + self.GoalState.ContainerId + "" + "" + "" + self.GoalState.RoleInstanceId + "" + "" + "") a = self.HttpPostWithHeaders("/machine?comp=roleProperties", roleProperties) Log("Posted Role Properties. CertificateThumbprint=" + thumbprint) return a def LoadBalancerProbeServer_Shutdown(self): """ Shutdown the LoadBalancerProbeServer. """ if self.LoadBalancerProbeServer != None: self.LoadBalancerProbeServer.shutdown() self.LoadBalancerProbeServer = None def GenerateTransportCert(self): """ Create ssl certificate for https communication with endpoint server. """ Run(Openssl + " req -x509 -nodes -subj /CN=LinuxTransport -days 32768 -newkey rsa:2048 -keyout TransportPrivate.pem -out TransportCert.pem") cert = "" for line in GetFileContents("TransportCert.pem").split('\n'): if not "CERTIFICATE" in line: cert += line.rstrip() return cert def DoVmmStartup(self): """ Spawn the VMM startup script. """ Log("Starting Microsoft System Center VMM Initialization Process") pid = subprocess.Popen(["/bin/bash","/mnt/cdrom/secure/"+VMM_STARTUP_SCRIPT_NAME,"-p /mnt/cdrom/secure/ "]).pid time.sleep(5) sys.exit(0) def TryUnloadAtapiix(self): """ If global modloaded is True, then we loaded the ata_piix kernel module, unload it. """ if modloaded: Run("rmmod ata_piix.ko",chk_err=False) Log("Unloaded ata_piix.ko driver for ATAPI CD-ROM") def TryLoadAtapiix(self): """ Load the ata_piix kernel module if it exists. If successful, set global modloaded to True. If unable to load module leave modloaded False. 
""" global modloaded modloaded=False retcode,krn=RunGetOutput('uname -r') krn_pth='/lib/modules/'+krn.strip('\n')+'/kernel/drivers/ata/ata_piix.ko' if Run("lsmod | grep ata_piix",chk_err=False) == 0 : Log("Module " + krn_pth + " driver for ATAPI CD-ROM is already present.") return 0 if retcode: Error("Unable to provision: Failed to call uname -r") return "Unable to provision: Failed to call uname" if os.path.isfile(krn_pth): retcode,output=RunGetOutput("insmod " + krn_pth,chk_err=False) else: Log("Module " + krn_pth + " driver for ATAPI CD-ROM does not exist.") return 1 if retcode != 0: Error('Error calling insmod for '+ krn_pth + ' driver for ATAPI CD-ROM') return retcode time.sleep(1) # check 3 times if the mod is loaded for i in range(3): if Run('lsmod | grep ata_piix'): continue else : modloaded=True break if not modloaded: Error('Unable to load '+ krn_pth + ' driver for ATAPI CD-ROM') return 1 Log("Loaded " + krn_pth + " driver for ATAPI CD-ROM") # we have succeeded loading the ata_piix mod if it can be done. def SearchForVMMStartup(self): """ Search for a DVD/CDROM containing VMM's VMM_CONFIG_FILE_NAME. Call TryLoadAtapiix in case we must load the ata_piix module first. If VMM_CONFIG_FILE_NAME is found, call DoVmmStartup. Else, return to Azure Provisioning process. """ self.TryLoadAtapiix() if os.path.exists('/mnt/cdrom/secure') == False: CreateDir("/mnt/cdrom/secure", "root", 0700) mounted=False for dvds in [re.match(r'(sr[0-9]|hd[c-z]|cdrom[0-9]|cd[0-9]?)',x) for x in os.listdir('/dev/')]: if dvds == None: continue dvd = '/dev/'+dvds.group(0) if Run("LC_ALL=C fdisk -l " + dvd + " | grep Disk",chk_err=False): continue # Not mountable else: for retry in range(1,6): retcode,output=RunGetOutput("mount -v " + dvd + " /mnt/cdrom/secure") Log(output[:-1]) if retcode == 0: Log("mount succeeded on attempt #" + str(retry) ) mounted=True break if 'is already mounted on /mnt/cdrom/secure' in output: Log("Device " + dvd + " is already mounted on /mnt/cdrom/secure." 
+ str(retry) ) mounted=True break Log("mount failed on attempt #" + str(retry) ) Log("mount loop sleeping 5...") time.sleep(5) if not mounted: # unable to mount continue if not os.path.isfile("/mnt/cdrom/secure/"+VMM_CONFIG_FILE_NAME): #nope - mount the next drive if mounted: Run("umount "+dvd,chk_err=False) mounted=False continue else : # it is the vmm startup self.DoVmmStartup() Log("VMM Init script not found. Provisioning for Azure") return def Provision(self): """ Responible for: Regenerate ssh keys, Mount, read, and parse ovfenv.xml from provisioning dvd rom Process the ovfenv.xml info Call ReportRoleProperties If configured, delete root password. Return None on success, error string on error. """ enabled = Config.get("Provisioning.Enabled") if enabled != None and enabled.lower().startswith("n"): return Log("Provisioning image started.") type = Config.get("Provisioning.SshHostKeyPairType") if type == None: type = "rsa" regenerateKeys = Config.get("Provisioning.RegenerateSshHostKeyPair") if regenerateKeys == None or regenerateKeys.lower().startswith("y"): Run("rm -f /etc/ssh/ssh_host_*key*") Run("ssh-keygen -N '' -t " + type + " -f /etc/ssh/ssh_host_" + type + "_key") MyDistro.restartSshService() #SetFileContents(LibDir + "/provisioned", "") dvd = None for dvds in [re.match(r'(sr[0-9]|hd[c-z]|cdrom[0-9]|cd[0-9]?)',x) for x in os.listdir('/dev/')]: if dvds == None : continue dvd = '/dev/'+dvds.group(0) if dvd == None: # No DVD device detected Error("No DVD device detected, unable to provision.") return "No DVD device detected, unable to provision." 
if MyDistro.mediaHasFilesystem(dvd) is False : out=MyDistro.load_ata_piix() if out: return out for i in range(10): # we may have to wait if os.path.exists(dvd): break Log("Waiting for DVD - sleeping 1 - "+str(i+1)+" try...") time.sleep(1) if os.path.exists('/mnt/cdrom/secure') == False: CreateDir("/mnt/cdrom/secure", "root", 0700) #begin mount loop - 5 tries - 5 sec wait between for retry in range(1,6): location='/mnt/cdrom/secure' retcode,output=MyDistro.mountDVD(dvd,location) Log(output[:-1]) if retcode == 0: Log("mount succeeded on attempt #" + str(retry) ) break if 'is already mounted on /mnt/cdrom/secure' in output: Log("Device " + dvd + " is already mounted on /mnt/cdrom/secure." + str(retry) ) break Log("mount failed on attempt #" + str(retry) ) Log("mount loop sleeping 5...") time.sleep(5) if not os.path.isfile("/mnt/cdrom/secure/ovf-env.xml"): Error("Unable to provision: Missing ovf-env.xml on DVD.") return "Failed to retrieve provisioning data (0x02)." ovfxml = (GetFileContents(u"/mnt/cdrom/secure/ovf-env.xml",asbin=False)) # use unicode here to ensure correct codec gets used. if ord(ovfxml[0]) > 128 and ord(ovfxml[1]) > 128 and ord(ovfxml[2]) > 128 : ovfxml = ovfxml[3:] # BOM is not stripped. First three bytes are > 128 and not unicode chars so we ignore them. ovfxml=ovfxml.strip(chr(0x00)) # we may have NULLs. 
ovfxml=ovfxml[ovfxml.find('.*?<", "*<", ovfxml)) Run("umount " + dvd,chk_err=False) MyDistro.unload_ata_piix() error = None if ovfxml != None: Log("Provisioning image using OVF settings in the DVD.") ovfobj = OvfEnv().Parse(ovfxml) if ovfobj != None: error = ovfobj.Process() if error : Error ("Provisioning image FAILED " + error) return ("Provisioning image FAILED " + error) Log("Ovf XML process finished") # This is done here because regenerated SSH host key pairs may be potentially overwritten when processing the ovfxml fingerprint = RunGetOutput("ssh-keygen -lf /etc/ssh/ssh_host_" + type + "_key.pub")[1].rstrip().split()[1].replace(':','') self.ReportRoleProperties(fingerprint) delRootPass = Config.get("Provisioning.DeleteRootPassword") if delRootPass != None and delRootPass.lower().startswith("y"): MyDistro.deleteRootPassword() Log("Provisioning image completed.") return error def Run(self): """ Called by 'waagent -daemon.' Main loop to process the goal state. State is posted every 25 seconds when provisioning has been completed. Search for VMM enviroment, start VMM script if found. Perform DHCP and endpoint server discovery by calling DoDhcpWork(). Check wire protocol versions. Set SCSI timeout on root device. Call GenerateTransportCert() to create ssl certs for server communication. Call UpdateGoalState(). If not provisioned, call ReportNotReady("Provisioning", "Starting") Call Provision(), set global provisioned = True if successful. Call goalState.Process() Start LBProbeServer if indicated in waagent.conf. Start the StateConsumer if indicated in waagent.conf. ReportReady if provisioning is complete. If provisioning failed, call ReportNotReady("ProvisioningFailed", provisionError) """ SetFileContents("/var/run/waagent.pid", str(os.getpid()) + "\n") reportHandlerStatusCount = 0 # Determine if we are in VMM. Spawn VMM_STARTUP_SCRIPT_NAME if found. 
self.SearchForVMMStartup() ipv4='' while ipv4 == '' or ipv4 == '0.0.0.0' : ipv4=MyDistro.GetIpv4Address() if ipv4 == '' or ipv4 == '0.0.0.0' : Log("Waiting for network.") time.sleep(10) Log("IPv4 address: " + ipv4) mac='' mac=MyDistro.GetMacAddress() if len(mac)>0 : Log("MAC address: " + ":".join(["%02X" % Ord(a) for a in mac])) # Consume Entropy in ACPI table provided by Hyper-V try: SetFileContents("/dev/random", GetFileContents("/sys/firmware/acpi/tables/OEM0")) except: pass Log("Probing for Azure environment.") self.Endpoint = self.DoDhcpWork() while self.Endpoint == None: Log("Azure environment not detected.") Log("Retry environment detection in 60 seconds") time.sleep(60) self.Endpoint = self.DoDhcpWork() Log("Discovered Azure endpoint: " + self.Endpoint) if not self.CheckVersions(): Error("Agent.CheckVersions failed") sys.exit(1) self.EnvMonitor = EnvMonitor() # Set SCSI timeout on SCSI disks MyDistro.initScsiDiskTimeout() global provisioned global provisionError global Openssl Openssl = Config.get("OS.OpensslPath") if Openssl == None: Openssl = "openssl" self.TransportCert = self.GenerateTransportCert() eventMonitor = None incarnation = None # goalStateIncarnationFromHealthReport currentPort = None # loadBalancerProbePort goalState = None # self.GoalState, instance of GoalState provisioned = os.path.exists(LibDir + "/provisioned") program = Config.get("Role.StateConsumer") provisionError = None lbProbeResponder = True setting = Config.get("LBProbeResponder") if setting != None and setting.lower().startswith("n"): lbProbeResponder = False while True: if (goalState == None) or (incarnation == None) or (goalState.Incarnation != incarnation): try: goalState = self.UpdateGoalState() except HttpResourceGoneError as e: Warn("Incarnation is out of date:{0}".format(e)) incarnation = None continue if goalState == None : Warn("Failed to fetch goalstate") continue if provisioned == False: self.ReportNotReady("Provisioning", "Starting") goalState.Process() if 
provisioned == False: provisionError = self.Provision() if provisionError == None : provisioned = True SetFileContents(LibDir + "/provisioned", "") lastCtime = "NOTFIND" try: walaConfigFile = MyDistro.getConfigurationPath() lastCtime = time.ctime(os.path.getctime(walaConfigFile)) except: pass #Get Ctime of wala config, can help identify the base image of this VM AddExtensionEvent(name="WALA",op=WALAEventOperation.Provision,isSuccess=True, message="WALA Config Ctime:"+lastCtime) executeCustomData = Config.get("Provisioning.ExecuteCustomData") if executeCustomData != None and executeCustomData.lower().startswith("y"): if os.path.exists(LibDir + '/CustomData'): Run('chmod +x ' + LibDir + '/CustomData') Run(LibDir + '/CustomData') else: Error(LibDir + '/CustomData does not exist.') # # only one port supported # restart server if new port is different than old port # stop server if no longer a port # goalPort = goalState.LoadBalancerProbePort if currentPort != goalPort: try: self.LoadBalancerProbeServer_Shutdown() currentPort = goalPort if currentPort != None and lbProbeResponder == True: self.LoadBalancerProbeServer = LoadBalancerProbeServer(currentPort) if self.LoadBalancerProbeServer == None : lbProbeResponder = False Log("Unable to create LBProbeResponder.") except Exception, e: Error("Failed to launch LBProbeResponder: {0}".format(e)) currentPort = None # Report SSH key fingerprint type = Config.get("Provisioning.SshHostKeyPairType") if type == None: type = "rsa" host_key_path = "/etc/ssh/ssh_host_" + type + "_key.pub" if(MyDistro.waitForSshHostKey(host_key_path)): fingerprint = RunGetOutput("ssh-keygen -lf /etc/ssh/ssh_host_" + type + "_key.pub")[1].rstrip().split()[1].replace(':','') self.ReportRoleProperties(fingerprint) if program != None and DiskActivated == True: try: Children.append(subprocess.Popen([program, "Ready"])) except OSError, e : ErrorWithPrefix('SharedConfig.Parse','Exception: '+ str(e) +' occured launching ' + program ) program = None 
sleepToReduceAccessDenied = 3 time.sleep(sleepToReduceAccessDenied) if provisionError != None: incarnation = self.ReportNotReady("ProvisioningFailed", provisionError) else: incarnation = self.ReportReady() # Process our extensions. if goalState.ExtensionsConfig == None and goalState.ExtensionsConfigXml != None : reportHandlerStatusCount = 0 #Reset count when new goal state comes goalState.ExtensionsConfig = ExtensionsConfig().Parse(goalState.ExtensionsConfigXml) # report the status/heartbeat results of extension processing if goalState.ExtensionsConfig != None : ret = goalState.ExtensionsConfig.ReportHandlerStatus() if ret != 0: Error("Failed to report handler status") elif reportHandlerStatusCount % 1000 == 0: #Agent report handler status every 25 seconds. Reduce the log entries by adding a count Log("Successfully reported handler status") reportHandlerStatusCount += 1 if not eventMonitor: eventMonitor = WALAEventMonitor(self.HttpPostWithHeaders) eventMonitor.StartEventsLoop() time.sleep(25 - sleepToReduceAccessDenied) WaagentLogrotate = """\ /var/log/waagent.log { monthly rotate 6 notifempty missingok } """ def GetMountPoint(mountlist, device): """ Example of mountlist: /dev/sda1 on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) /dev/sdb1 on /mnt/resource type ext4 (rw) """ if (mountlist and device): for entry in mountlist.split('\n'): if(re.search(device, entry)): tokens = entry.split() #Return the 3rd column of this line return tokens[2] if len(tokens) > 2 else None return None def FindInLinuxKernelCmdline(option): """ Return match object if 'option' is present in the kernel boot options of the grub configuration. 
""" m=None matchs=r'^.*?'+MyDistro.grubKernelBootOptionsLine+r'.*?'+option+r'.*$' try: m=FindStringInFile(MyDistro.grubKernelBootOptionsFile,matchs) except IOError, e: Error('FindInLinuxKernelCmdline: Exception opening ' + MyDistro.grubKernelBootOptionsFile + 'Exception:' + str(e)) return m def AppendToLinuxKernelCmdline(option): """ Add 'option' to the kernel boot options of the grub configuration. """ if not FindInLinuxKernelCmdline(option): src=r'^(.*?'+MyDistro.grubKernelBootOptionsLine+r')(.*?)("?)$' rep=r'\1\2 '+ option + r'\3' try: ReplaceStringInFile(MyDistro.grubKernelBootOptionsFile,src,rep) except IOError, e : Error('AppendToLinuxKernelCmdline: Exception opening ' + MyDistro.grubKernelBootOptionsFile + 'Exception:' + str(e)) return 1 Run("update-grub",chk_err=False) return 0 def RemoveFromLinuxKernelCmdline(option): """ Remove 'option' to the kernel boot options of the grub configuration. """ if FindInLinuxKernelCmdline(option): src=r'^(.*?'+MyDistro.grubKernelBootOptionsLine+r'.*?)('+option+r')(.*?)("?)$' rep=r'\1\3\4' try: ReplaceStringInFile(MyDistro.grubKernelBootOptionsFile,src,rep) except IOError, e : Error('RemoveFromLinuxKernelCmdline: Exception opening ' + MyDistro.grubKernelBootOptionsFile + 'Exception:' + str(e)) return 1 Run("update-grub",chk_err=False) return 0 def FindStringInFile(fname,matchs): """ Return match object if found in file. """ try: ms=re.compile(matchs) for l in (open(fname,'r')).readlines(): m=re.search(ms,l) if m: return m except: raise return None def ReplaceStringInFile(fname,src,repl): """ Replace 'src' with 'repl' in file. """ try: sr=re.compile(src) if FindStringInFile(fname,src): updated='' for l in (open(fname,'r')).readlines(): n=re.sub(sr,repl,l) updated+=n ReplaceFileContentsAtomic(fname,updated) except : raise return def ApplyVNUMAWorkaround(): """ If kernel version has NUMA bug, add 'numa=off' to kernel boot options. 
""" VersionParts = platform.release().replace('-', '.').split('.') if int(VersionParts[0]) > 2: return if int(VersionParts[1]) > 6: return if int(VersionParts[2]) > 37: return if AppendToLinuxKernelCmdline("numa=off") == 0 : Log("Your kernel version " + platform.release() + " has a NUMA-related bug: NUMA has been disabled.") else : "Error adding 'numa=off'. NUMA has not been disabled." def RevertVNUMAWorkaround(): """ Remove 'numa=off' from kernel boot options. """ if RemoveFromLinuxKernelCmdline("numa=off") == 0 : Log('NUMA has been re-enabled') else : Log('NUMA has not been re-enabled') def Install(): """ Install the agent service. Check dependencies. Create /etc/waagent.conf and move old version to /etc/waagent.conf.old Copy RulesFiles to /var/lib/waagent Create /etc/logrotate.d/waagent Set /etc/ssh/sshd_config ClientAliveInterval to 180 Call ApplyVNUMAWorkaround() """ if MyDistro.checkDependencies(): return 1 os.chmod(sys.argv[0], 0755) SwitchCwd() for a in RulesFiles: if os.path.isfile(a): if os.path.isfile(GetLastPathElement(a)): os.remove(GetLastPathElement(a)) shutil.move(a, ".") Warn("Moved " + a + " -> " + LibDir + "/" + GetLastPathElement(a) ) MyDistro.registerAgentService() if os.path.isfile("/etc/waagent.conf"): try: os.remove("/etc/waagent.conf.old") except: pass try: os.rename("/etc/waagent.conf", "/etc/waagent.conf.old") Warn("Existing /etc/waagent.conf has been renamed to /etc/waagent.conf.old") except: pass SetFileContents("/etc/waagent.conf", MyDistro.waagent_conf_file) SetFileContents("/etc/logrotate.d/waagent", WaagentLogrotate) filepath = "/etc/ssh/sshd_config" ReplaceFileContentsAtomic(filepath, "\n".join(filter(lambda a: not a.startswith("ClientAliveInterval"), GetFileContents(filepath).split('\n'))) + "\nClientAliveInterval 180\n") Log("Configured SSH client probing to keep connections alive.") ApplyVNUMAWorkaround() return 0 def GetMyDistro(dist_class_name=''): """ Return MyDistro object. NOTE: Logging is not initialized at this point. 
""" if dist_class_name == '': if 'Linux' in platform.system(): Distro=DistInfo()[0] else : # I know this is not Linux! if 'FreeBSD' in platform.system(): Distro=platform.system() Distro=Distro.strip('"') Distro=Distro.strip(' ') dist_class_name=Distro+'Distro' else: Distro=dist_class_name if not globals().has_key(dist_class_name): print Distro+' is not a supported distribution.' return None return globals()[dist_class_name]() # the distro class inside this module. def DistInfo(fullname=0): if 'FreeBSD' in platform.system(): release = re.sub('\-.*\Z', '', str(platform.release())) distinfo = ['FreeBSD', release] return distinfo if 'linux_distribution' in dir(platform): distinfo = list(platform.linux_distribution(full_distribution_name=fullname)) distinfo[0] = distinfo[0].strip() # remove trailing whitespace in distro name if os.path.exists("/etc/euleros-release"): distinfo[0] = "euleros" return distinfo else: return platform.dist() def PackagedInstall(buildroot): """ Called from setup.py for use by RPM. Generic implementation Creates directories and files /etc/waagent.conf, /etc/init.d/waagent, /usr/sbin/waagent, /etc/logrotate.d/waagent, /etc/sudoers.d/waagent under buildroot. Copies generated files waagent.conf, into place and exits. """ MyDistro=GetMyDistro() if MyDistro == None : sys.exit(1) MyDistro.packagedInstall(buildroot) def LibraryInstall(buildroot): pass def Uninstall(): """ Uninstall the agent service. Copy RulesFiles back to original locations. Delete agent-related files. Call RevertVNUMAWorkaround(). """ SwitchCwd() for a in RulesFiles: if os.path.isfile(GetLastPathElement(a)): try: shutil.move(GetLastPathElement(a), a) Warn("Moved " + LibDir + "/" + GetLastPathElement(a) + " -> " + a ) except: pass MyDistro.unregisterAgentService() MyDistro.uninstallDeleteFiles() RevertVNUMAWorkaround() return 0 def Deprovision(force, deluser): """ Remove user accounts created by provisioning. 
Disables root password if Provisioning.DeleteRootPassword = 'y' Stop agent service. Remove SSH host keys if they were generated by the provision. Set hostname to 'localhost.localdomain'. Delete cached system configuration files in /var/lib and /var/lib/waagent. """ #Append blank line at the end of file, so the ctime of this file is changed every time Run("echo ''>>"+ MyDistro.getConfigurationPath()) SwitchCwd() ovfxml = GetFileContents(LibDir+"/ovf-env.xml") ovfobj = None if ovfxml != None: ovfobj = OvfEnv().Parse(ovfxml, True) print("WARNING! The waagent service will be stopped.") print("WARNING! All SSH host key pairs will be deleted.") print("WARNING! Cached DHCP leases will be deleted.") MyDistro.deprovisionWarnUser() delRootPass = Config.get("Provisioning.DeleteRootPassword") if delRootPass != None and delRootPass.lower().startswith("y"): print("WARNING! root password will be disabled. You will not be able to login as root.") if ovfobj != None and deluser == True: print("WARNING! " + ovfobj.UserName + " account and entire home directory will be deleted.") if force == False and not raw_input('Do you want to proceed (y/n)? ').startswith('y'): return 1 MyDistro.stopAgentService() # Remove SSH host keys regenerateKeys = Config.get("Provisioning.RegenerateSshHostKeyPair") if regenerateKeys == None or regenerateKeys.lower().startswith("y"): Run("rm -f /etc/ssh/ssh_host_*key*") # Remove root password if delRootPass != None and delRootPass.lower().startswith("y"): MyDistro.deleteRootPassword() # Remove distribution specific networking configuration MyDistro.publishHostname('localhost.localdomain') MyDistro.deprovisionDeleteFiles() if deluser == True: MyDistro.DeleteAccount(ovfobj.UserName) return 0 def SwitchCwd(): """ Switch to cwd to /var/lib/waagent. Create if not present. """ CreateDir(LibDir, "root", 0700) os.chdir(LibDir) def Usage(): """ Print the arguments to waagent. 
""" print("usage: " + sys.argv[0] + " [-verbose] [-force] [-help|-install|-uninstall|-deprovision[+user]|-version|-serialconsole|-daemon]") return 0 def main(): """ Instantiate MyDistro, exit if distro class is not defined. Parse command-line arguments, exit with usage() on error. Instantiate ConfigurationProvider. Call appropriate non-daemon methods and exit. If daemon mode, enter Agent.Run() loop. """ if GuestAgentVersion == "": print("WARNING! This is a non-standard agent that does not include a valid version string.") if len(sys.argv) == 1: sys.exit(Usage()) LoggerInit('/var/log/waagent.log','/dev/console') global LinuxDistro LinuxDistro=DistInfo()[0] global MyDistro MyDistro=GetMyDistro() if MyDistro == None : sys.exit(1) args = [] conf_file = None global force force = False for a in sys.argv[1:]: if re.match("^([-/]*)(help|usage|\?)", a): sys.exit(Usage()) elif re.match("^([-/]*)version", a): print(GuestAgentVersion + " running on " + LinuxDistro) sys.exit(0) elif re.match("^([-/]*)verbose", a): myLogger.verbose = True elif re.match("^([-/]*)force", a): force = True elif re.match("^(?:[-/]*)conf=.+", a): conf_file = re.match("^(?:[-/]*)conf=(.+)", a).groups()[0] elif re.match("^([-/]*)(setup|install)", a): sys.exit(MyDistro.Install()) elif re.match("^([-/]*)(uninstall)", a): sys.exit(Uninstall()) else: args.append(a) global Config Config = ConfigurationProvider(conf_file) logfile = Config.get("Logs.File") if logfile is not None: myLogger.file_path = logfile logconsole = Config.get("Logs.Console") if logconsole is not None and logconsole.lower().startswith("n"): myLogger.con_path = None verbose = Config.get("Logs.Verbose") if verbose != None and verbose.lower().startswith("y"): myLogger.verbose=True global daemon daemon = False for a in args: if re.match("^([-/]*)deprovision\+user", a): sys.exit(Deprovision(force, True)) elif re.match("^([-/]*)deprovision", a): sys.exit(Deprovision(force, False)) elif re.match("^([-/]*)daemon", a): daemon = True elif 
re.match("^([-/]*)serialconsole", a): AppendToLinuxKernelCmdline("console=ttyS0 earlyprintk=ttyS0") Log("Configured kernel to use ttyS0 as the boot console.") sys.exit(0) else: print("Invalid command line parameter:" + a) sys.exit(1) if daemon == False: sys.exit(Usage()) global modloaded modloaded = False while True: try: SwitchCwd() Log(GuestAgentLongName + " Version: " + GuestAgentVersion) if IsLinux(): Log("Linux Distribution Detected : " + LinuxDistro) global WaAgent WaAgent = Agent() WaAgent.Run() except Exception, e: Error(traceback.format_exc()) Error("Exception: " + str(e)) Log("Restart agent in 15 seconds") time.sleep(15) if __name__ == '__main__' : main() Azure-WALinuxAgent-a976115/ci/000077500000000000000000000000001510742556200157655ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/ci/nosetests.sh000077500000000000000000000016421510742556200203560ustar00rootroot00000000000000#!/usr/bin/env bash set -u EXIT_CODE=0 echo "=========================================" echo "**** nosetests non-sudo tests ****" echo "=========================================" nosetests --ignore-files test_cgroupconfigurator_sudo.py --ignore-files test_signature_validation_sudo.py tests $NOSEOPTS || EXIT_CODE=$(($EXIT_CODE || $?)) echo EXIT_CODE no_sudo nosetests = $EXIT_CODE [[ -f .coverage ]] && \ sudo mv .coverage coverage.$(uuidgen).no_sudo.data echo "=========================================" echo "**** nosetests sudo tests ****" echo "=========================================" sudo env "PATH=$PATH" nosetests tests/ga/test_cgroupconfigurator_sudo.py tests/ga/test_signature_validation_sudo.py $NOSEOPTS || EXIT_CODE=$(($EXIT_CODE || $?)) echo EXIT_CODE with_sudo nosetests = $EXIT_CODE [[ -f .coverage ]] && \ sudo mv .coverage coverage.$(uuidgen).with_sudo.data exit "$EXIT_CODE" Azure-WALinuxAgent-a976115/ci/pylintrc000066400000000000000000000061731510742556200175630ustar00rootroot00000000000000[MESSAGES CONTROL] disable=C, # (C) convention, for programming standard 
violation
        broad-except,                       # W0703: *Catching too general exception %s*
        broad-exception-raised,             # W0719: Raising too general exception: Exception
        consider-using-dict-comprehension,  # R1717: *Consider using a dictionary comprehension*
        consider-using-from-import,         # R0402: Use 'from foo import bar' instead
        consider-using-in,                  # R1714: *Consider merging these comparisons with "in" to %r*
        consider-using-max-builtin,         # R1731: Consider using 'a = max(a, b)' instead of unnecessary if block
        consider-using-min-builtin,         # R1730: Consider using 'a = min(a, b)' instead of unnecessary if block
        consider-using-set-comprehension,   # R1718: *Consider using a set comprehension*
        consider-using-with,                # R1732: *Emitted if a resource-allocating assignment or call may be replaced by a 'with' block*
        duplicate-code,                     # R0801: *Similar lines in %s files*
        fixme,                              # Used when a warning note as FIXME or TODO is detected
        logging-format-interpolation,       # W1202: Use lazy % formatting in logging functions
        logging-fstring-interpolation,      # W1203: Use lazy % or .format() formatting in logging functions
        no-else-break,                      # R1723: *Unnecessary "%s" after "break"*
        no-else-continue,                   # R1724: *Unnecessary "%s" after "continue"*
        no-else-raise,                      # R1720: *Unnecessary "%s" after "raise"*
        no-else-return,                     # R1705: *Unnecessary "%s" after "return"*
        protected-access,                   # W0212: Access to a protected member of a client class
        raise-missing-from,                 # W0707: *Consider explicitly re-raising using the 'from' keyword*
        redundant-u-string-prefix,          # The u prefix for strings is no longer necessary in Python >=3.0
        simplifiable-if-expression,         # R1719: *The if expression can be replaced with %s*
        simplifiable-if-statement,          # R1703: *The if statement can be replaced with %s*
        super-with-arguments,               # R1725: *Consider using Python 3 style super() without arguments*
        too-few-public-methods,             # R0903: *Too few public methods (%s/%s)*
        too-many-ancestors,                 # R0901: *Too many ancestors (%s/%s)*
        too-many-arguments,                 # R0913: *Too many arguments (%s/%s)*
        too-many-boolean-expressions,       # R0916: *Too many boolean expressions in if statement (%s/%s)*
        too-many-branches,                  # R0912: *Too many branches (%s/%s)*
        too-many-instance-attributes,       # R0902: *Too many instance attributes (%s/%s)*
        too-many-locals,                    # R0914: *Too many local variables (%s/%s)*
        too-many-nested-blocks,             # R1702: *Too many nested blocks (%s/%s)*
        too-many-public-methods,            # R0904: *Too many public methods (%s/%s)*
        too-many-return-statements,         # R0911: *Too many return statements (%s/%s)*
        too-many-statements,                # R0915: *Too many statements (%s/%s)*
        unspecified-encoding,               # W1514: Using open without explicitly specifying an encoding
        use-a-generator,                    # R1729: *Use a generator instead '%s(%s)'*
        use-yield-from,                     # R1737: Use 'yield from' directly instead of yielding each element one by one
        useless-object-inheritance,         # R0205: *Class %r inherits from object, can be safely removed from bases in python3*
        useless-return,                     # R1711: *Useless return at end of function or method*

Azure-WALinuxAgent-a976115/ci/pytest.ini
[pytest]
filterwarnings =
    ignore:distro.linux_distribution\(\) is deprecated

Azure-WALinuxAgent-a976115/ci/pytest.sh
#!/usr/bin/env bash

set -u

EXIT_CODE=0

echo "========================================="
echo "**** pytest *** non-sudo tests ****"
echo "========================================="
pytest --verbose --config-file ci/pytest.ini --ignore-glob '*/test_cgroupconfigurator_sudo.py' --ignore-glob '*/test_signature_validation_sudo.py' tests || EXIT_CODE=$(($EXIT_CODE || $?))
echo EXIT_CODE pytests non-sudo = $EXIT_CODE

echo "========================================="
echo "**** pytest *** sudo tests ****"
echo "========================================="
sudo env "PATH=$PATH" pytest --verbose --config-file ci/pytest.ini tests/ga/test_cgroupconfigurator_sudo.py tests/ga/test_signature_validation_sudo.py ||
EXIT_CODE=$(($EXIT_CODE || $?)) echo EXIT_CODE pytests sudo = $EXIT_CODE exit "$EXIT_CODE" Azure-WALinuxAgent-a976115/config/000077500000000000000000000000001510742556200166375ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/66-azure-storage.rules000066400000000000000000000034451510742556200227420ustar00rootroot00000000000000# Azure specific rules. ACTION!="add|change", GOTO="walinuxagent_end" SUBSYSTEM!="block", GOTO="walinuxagent_end" ATTRS{ID_VENDOR}!="Msft", GOTO="walinuxagent_end" ATTRS{ID_MODEL}!="Virtual_Disk", GOTO="walinuxagent_end" # Match the known ID parts for root and resource disks. ATTRS{device_id}=="?00000000-0000-*", ENV{fabric_name}="root", GOTO="wa_azure_names" ATTRS{device_id}=="?00000000-0001-*", ENV{fabric_name}="resource", GOTO="wa_azure_names" # Gen2 disk. ATTRS{device_id}=="{f8b3781a-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi0", GOTO="azure_datadisk" # Create symlinks for data disks attached. ATTRS{device_id}=="{f8b3781b-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi1", GOTO="azure_datadisk" ATTRS{device_id}=="{f8b3781c-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi2", GOTO="azure_datadisk" ATTRS{device_id}=="{f8b3781d-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi3", GOTO="azure_datadisk" GOTO="walinuxagent_end" # Parse out the fabric name based off of scsi indicators. LABEL="azure_datadisk" ENV{DEVTYPE}=="partition", PROGRAM="/bin/sh -c 'readlink /sys/class/block/%k/../device|cut -d: -f4'", ENV{fabric_name}="$env{fabric_scsi_controller}/lun$result" ENV{DEVTYPE}=="disk", PROGRAM="/bin/sh -c 'readlink /sys/class/block/%k/device|cut -d: -f4'", ENV{fabric_name}="$env{fabric_scsi_controller}/lun$result" ENV{fabric_name}=="scsi0/lun0", ENV{fabric_name}="root" ENV{fabric_name}=="scsi0/lun1", ENV{fabric_name}="resource" # Don't create a symlink for the cd-rom. ENV{fabric_name}=="scsi0/lun2", GOTO="walinuxagent_end" # Create the symlinks.
LABEL="wa_azure_names" ENV{DEVTYPE}=="disk", SYMLINK+="disk/azure/$env{fabric_name}" ENV{DEVTYPE}=="partition", SYMLINK+="disk/azure/$env{fabric_name}-part%n" LABEL="walinuxagent_end" Azure-WALinuxAgent-a976115/config/99-azure-product-uuid.rules000066400000000000000000000005271510742556200237260ustar00rootroot00000000000000SUBSYSTEM!="dmi", GOTO="product_uuid-exit" ATTR{sys_vendor}!="Microsoft Corporation", GOTO="product_uuid-exit" ATTR{product_name}!="Virtual Machine", GOTO="product_uuid-exit" TEST!="/sys/devices/virtual/dmi/id/product_uuid", GOTO="product_uuid-exit" RUN+="/bin/chmod 0444 /sys/devices/virtual/dmi/id/product_uuid" LABEL="product_uuid-exit" Azure-WALinuxAgent-a976115/config/alpine/000077500000000000000000000000001510742556200201075ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/alpine/waagent.conf000066400000000000000000000061071510742556200224100ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=y # Format if unformatted. If 'n', resource disk will not be mounted. 
ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Microsoft Azure. LBProbeResponder=y # Enable logging to serial console (y|n) # When stdout is not enough... # 'y' if not set Logs.Console=y # Enable verbose logging (y|n) Logs.Verbose=n # Preferred network interface to communicate with Azure platform Network.Interface=eth0 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # TODO: Update the wiki link and point to readme page or public facing doc # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/arch/000077500000000000000000000000001510742556200175545ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/arch/waagent.conf000066400000000000000000000064031510742556200220540ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Windows Azure. 
LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/bigip/000077500000000000000000000000001510742556200177315ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/bigip/waagent.conf000066400000000000000000000062171510742556200222340ustar00rootroot00000000000000# # Windows Azure Linux Agent Configuration # # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. # waagent cannot do this on BIG-IP VE Provisioning.MonitorHostName=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. 
ResourceDisk.SwapSizeMB=0 # Respond to load balancer probes if requested by Windows Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Specify location of waagent lib dir on BIG-IP Lib.Dir=/shared/vadc/azure/waagent/ # Specify location of sshd config file on BIG-IP OS.SshdConfigPath=/config/ssh/sshd_config # Disable RDMA management and set up OS.EnableRDMA=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/chainguard/000077500000000000000000000000001510742556200207445ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/chainguard/waagent.conf000066400000000000000000000071601510742556200232450ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. 
Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=n # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. 
OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details AutoUpdate.UpdateToLatestVersion=n # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/clearlinux/000077500000000000000000000000001510742556200210055ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/clearlinux/waagent.conf000066400000000000000000000055411510742556200233070ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. 
ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/coreos/000077500000000000000000000000001510742556200201315ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/coreos/waagent.conf000066400000000000000000000066601510742556200224360ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". 
Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=ed25519 # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Windows Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks OS.AllowHTTP=y # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/debian/000077500000000000000000000000001510742556200200615ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/debian/waagent.conf000066400000000000000000000071621510742556200223640ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". 
Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=n Azure-WALinuxAgent-a976115/config/devuan/000077500000000000000000000000001510742556200201215ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/devuan/waagent.conf000066400000000000000000000072331510742556200224230ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. 
Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y # Enforce control groups limits on the agent and extensions CGroups.EnforceLimits=n # CGroups which are excluded from limits, comma separated CGroups.Excluded=customscript,runcommand Azure-WALinuxAgent-a976115/config/freebsd/000077500000000000000000000000001510742556200202515ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/freebsd/waagent.conf000066400000000000000000000065421510742556200225550ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs' here. 
ResourceDisk.Filesystem=ufs # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=y # Size of the swapfile. ResourceDisk.SwapSizeMB=16384 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh OS.PasswordPath=/etc/master.passwd OS.SudoersDir=/usr/local/etc/sudoers.d # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/gaia/000077500000000000000000000000001510742556200175405ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/gaia/waagent.conf000066400000000000000000000070571510742556200220460ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. Provisioning.PasswordCryptId=1 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=y # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext3 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=y # Size of the swapfile. 
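`ResourceDisk.SwapSizeMB` is expressed in megabytes, so the agent ultimately has to turn the setting into a byte count when allocating the swapfile on the resource disk. A hedged sketch of that conversion (the helper name is illustrative, not the agent's real code):

```python
def swapfile_bytes(swap_size_mb):
    """Convert a ResourceDisk.SwapSizeMB setting to a size in bytes.

    A value of 0 (or a negative value) means no swapfile is created.
    A value such as 1024 yields one gibibyte of swap.
    """
    mb = int(swap_size_mb)
    if mb <= 0:
        return 0  # swap disabled
    return mb * 1024 * 1024


size = swapfile_bytes("1024")
```

Note that `ResourceDisk.EnableSwap` must also be set to `y` for the size to take effect at all; the sketch only models the unit conversion.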
ResourceDisk.SwapSizeMB=1024 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=/var/lib/waagent/openssl # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images OS.EnableRDMA=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it reverts to the pre-installed agent that comes with image # AutoUpdate.Enabled is a legacy parameter used only for backwards compatibility. We encourage users to transition to new option AutoUpdate.UpdateToLatestVersion # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/iosxe/000077500000000000000000000000001510742556200177665ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/iosxe/waagent.conf000066400000000000000000000064671510742556200223000ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. 
ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/mariner/000077500000000000000000000000001510742556200202745ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/mariner/waagent.conf000066400000000000000000000053521510742556200225760ustar00rootroot00000000000000# Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Enable instance creation Provisioning.Enabled=n # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. 
ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Enable verbose logging (y|n) Logs.Verbose=n # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is disabled # EnableOverProvisioning=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/nsbsd/000077500000000000000000000000001510742556200177505ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/nsbsd/waagent.conf000066400000000000000000000067121510742556200222530ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. 
Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs' here. ResourceDisk.Filesystem=ufs # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) TODO set n Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh OS.PasswordPath=/etc/master.passwd OS.SudoersDir=/usr/local/etc/sudoers.d # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # Lib.Dir=/usr/Firewall/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # Extension.LogDir=/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it reverts to the pre-installed agent that comes with image # AutoUpdate.Enabled is a legacy parameter used only for backwards compatibility. We encourage users to transition to new option AutoUpdate.UpdateToLatestVersion # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is disabled # EnableOverProvisioning=n # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=n Azure-WALinuxAgent-a976115/config/openbsd/000077500000000000000000000000001510742556200202715ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/openbsd/waagent.conf000066400000000000000000000063721510742556200225760ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. OpenBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ufs2 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=y # Max size of the swap partition in MB ResourceDisk.SwapSizeMB=65536 # Comma-separated list of mount options. See mount(8) for valid options. 
ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=/usr/local/bin/eopenssl # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh OS.PasswordPath=/etc/master.passwd # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/photonos/000077500000000000000000000000001510742556200205105ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/photonos/waagent.conf000066400000000000000000000046321510742556200230120ustar00rootroot00000000000000# Microsoft Azure Linux Agent Configuration # # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Enable instance creation Provisioning.Enabled=n # Rely on cloud-init to provision Provisioning.UseCloudInit=y # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. 
ResourceDisk.SwapSizeMB=0 # Enable verbose logging (y|n) Logs.Verbose=n # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is disabled # EnableOverProvisioning=n Azure-WALinuxAgent-a976115/config/suse/000077500000000000000000000000001510742556200176165ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/suse/waagent.conf000066400000000000000000000072201510742556200221140ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. 
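`Provisioning.MonitorHostName` works by periodically comparing the current hostname against the last value the agent published, and only republishing (via a DHCP request) when the two differ. A simplified, state-tracking sketch of that comparison — hypothetical names, not the agent's actual monitor code:

```python
class HostnameMonitor:
    """Track the last published hostname and report when it changes.

    Sketch only: the real agent reacts to a change by re-sending a DHCP
    request so the platform learns the new name; here we model just the
    change detection that drives that republish.
    """

    def __init__(self, initial):
        self.published = initial

    def check(self, current):
        # Return True (and update state) when a republish is needed.
        if current == self.published:
            return False
        self.published = current
        return True


monitor = HostnameMonitor("azure-vm")
first = monitor.check("azure-vm")     # unchanged, nothing to do
second = monitor.check("azure-vm-2")  # changed, republish needed
```

The polling cadence itself is controlled separately (`Provisioning.MonitorHostNamePeriod` in the default configuration).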
Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=btrfs # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=compress=lzo # Respond to load balancer probes if requested by Microsoft Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable checking RDMA driver version and update # OS.CheckRdmaDriver=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/ubuntu/000077500000000000000000000000001510742556200201615ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/config/ubuntu/waagent.conf000066400000000000000000000070671510742556200224700ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. 
Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Microsoft Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable RDMA kernel update, this value is effective on Ubuntu # OS.UpdateRdmaDriver=y # Enable checking RDMA driver version and update # OS.CheckRdmaDriver=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-a976115/config/waagent.conf000066400000000000000000000104051510742556200211340ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # How often (in seconds) to poll for new goal states Extensions.GoalStatePeriod=6 # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". 
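With `Provisioning.Agent=auto` the agent picks a provisioner at runtime rather than being pinned to one. A rough sketch of that decision — the selection details here are an assumption for illustration, not the agent's exact logic — is: honor an explicit setting as-is, and for "auto" prefer cloud-init when the image ships it, falling back to waagent otherwise:

```python
def select_provisioning_agent(setting, cloud_init_installed):
    """Resolve a Provisioning.Agent setting to a concrete provisioner.

    Explicit values ("waagent", "cloud-init", "disabled") are honored
    unchanged; "auto" prefers cloud-init when the image provides it.
    Illustrative sketch only, not the agent's real resolution code.
    """
    if setting != "auto":
        return setting
    return "cloud-init" if cloud_init_installed else "waagent"


choice = select_provisioning_agent("auto", cloud_init_installed=False)
```

On an image without cloud-init, `choice` resolves to `"waagent"`; with cloud-init present, "auto" defers provisioning to it.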
Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # How often (in seconds) to monitor host name changes. Provisioning.MonitorHostNamePeriod=30 # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # How often (in seconds) to set the root device timeout. OS.RootDeviceScsiTimeoutPeriod=30 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable checking RDMA driver version and update # OS.CheckRdmaDriver=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y # How often (in seconds) to check the firewall rules OS.EnableFirewallPeriod=30 # How often (in seconds) to remove the udev rules for persistent network interface names (75-persistent-net-generator.rules and /etc/udev/rules.d/70-persistent-net.rules) OS.RemovePersistentNetRulesPeriod=30 # How often (in seconds) to monitor for DHCP client restarts OS.MonitorDhcpClientRestartPeriod=30 Azure-WALinuxAgent-a976115/config/waagent.logrotate000066400000000000000000000016141510742556200222110ustar00rootroot00000000000000/var/log/waagent.log { # Old versions of log files are compressed with gzip by default. compress # Rotate log files > 20 MB, and keep the last 50 archived files. With a compression ratio ranging from 5-10%, the # archived files would average around 50-100 MB. Even for an extremely chatty agent, the average size of # the compressed files would not go beyond ~2 MB per day. size 20M rotate 50 # Add the date as an extension when rotating logs dateext # Format the date extension as YYYY-MM-DD-SSSSSSS. logrotate does not provide an hours, minutes, or seconds # option. Adding %s (system clock epoch time) differentiates rotated log files within the same day. dateformat -%Y-%m-%d-%s # Do not rotate the log if it is empty notifempty # If the log file is missing, go on to the next one without issuing an error message.
missingok }Azure-WALinuxAgent-a976115/doc/000077500000000000000000000000001510742556200161375ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/doc/man/000077500000000000000000000000001510742556200167125ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/doc/man/waagent.1000066400000000000000000000057401510742556200204300ustar00rootroot00000000000000.TH WAAGENT 1 "June 2025" "Azure Linux Agent" "System Administration" .SH NAME waagent \- Azure Linux VM Agent .SH SYNOPSIS .B waagent [-verbose] [-force] [-help] [\fISUBCOMMAND\fR]... .SH DESCRIPTION The Azure Linux Agent (waagent) manages virtual machine interaction with the Azure fabric controller. Most subcommands are not meant to be run directly by the user. However, some subcommands may be useful for debugging (such as collect-logs, version, and show-configuration) and deprovisioning. .SH SUBCOMMANDS .TP \fB-collect-logs\fR Runs the log collector utility, which collects relevant agent logs for debugging and stores them in the agent folder on disk. The exact location is shown when the command runs. Use the \fB-full\fR flag for more exhaustive log collection. .TP \fB-configuration-path FILE\fR Used together with \fB-start\fR or \fB-daemon\fR to specify the configuration file. Defaults to /etc/waagent.conf. .TP \fB-daemon -start\fR Run waagent as a daemon in the background. .TP \fB-deprovision\fR Attempt to clean the system and make it suitable for re-provisioning. WARNING: Deprovision does not guarantee that the image is cleared of all sensitive information and suitable for redistribution. .TP \fB-deprovision+user\fR Same as \fB-deprovision\fR, but also removes the last provisioned user account. .TP \fB-register-service\fR Register waagent as a service and enable it. .TP \fB-run-exthandlers\fR Check for updates to waagent and the extension handlers. Note that output to /dev/console will be temporarily suspended. .TP \fB-setup-firewall=IP\fR Set up firewall rules for endpoint \fBIP\fR.
.TP \fB-show-configuration\fR Print the current configuration, including values read from waagent.conf. .TP \fB-help\fR Display usage information. .TP \fB-version\fR Show the current version of the agent. .SH CONFIGURATION The agent is configured via this file by default: .B /etc/waagent.conf This file contains key=value settings that control agent behavior, including provisioning, disk formatting, resource limits, and certificate handling. Example entries: .RS Provisioning.Enabled=y ResourceDisk.Format=y ResourceDisk.MountPoint=/mnt/resource RSA.KeyLength=2048 Logs.Verbose=y .RE .SH FILES AND DIRECTORIES .TP \fB/etc/waagent.conf\fR Main configuration file. .TP \fB/var/lib/waagent\fR State files and provisioning artifacts. .TP \fB/var/log/waagent.log\fR Agent log file. .SH SERVICES On systemd systems, the agent runs as a service: .RS .B systemctl start waagent .B systemctl enable waagent .RE .SH EXIT STATUS Zero on success, non-zero on error. .SH EXAMPLES .TP Deprovision before capturing an image: .RS waagent -deprovision+user && rm -rf /var/lib/waagent && shutdown -h now .RE .SH SEE ALSO .BR systemctl (1), .BR cloud-init (1) .SH HOMEPAGE .B https://github.com/Azure/WALinuxAgent .B https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/agent-linux .SH COPYRIGHT Copyright 2018 Microsoft Corporation .SH AUTHORS Microsoft Azure Linux Team Azure-WALinuxAgent-a976115/init/000077500000000000000000000000001510742556200163355ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/arch/000077500000000000000000000000001510742556200172525ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/arch/waagent.service000066400000000000000000000005371510742556200222670ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/bin/waagent -daemon
Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/azure-vmextensions.slice000066400000000000000000000002141510742556200232410ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Extensions DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes MemoryAccounting=yes Azure-WALinuxAgent-a976115/init/azure.slice000066400000000000000000000001471510742556200205060ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Agent and Extensions DefaultDependencies=no Before=slices.target Azure-WALinuxAgent-a976115/init/chainguard/000077500000000000000000000000001510742556200204425ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/chainguard/90-waagent.preset000066400000000000000000000000171510742556200235400ustar00rootroot00000000000000enable waagent Azure-WALinuxAgent-a976115/init/chainguard/waagent.service000066400000000000000000000006211510742556200234510ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=waagent -daemon Restart=always RestartSec=5 Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/clearlinux/000077500000000000000000000000001510742556200205035ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/clearlinux/waagent.service000066400000000000000000000005661510742556200235220ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/usr/share/defaults/waagent/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] 
WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/coreos/000077500000000000000000000000001510742556200176275ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/coreos/cloud-config.yml000066400000000000000000000023511510742556200227240ustar00rootroot00000000000000#cloud-config coreos: units: - name: etcd.service runtime: true drop-ins: - name: 10-oem.conf content: | [Service] Environment=ETCD_PEER_ELECTION_TIMEOUT=1200 - name: etcd2.service runtime: true drop-ins: - name: 10-oem.conf content: | [Service] Environment=ETCD_ELECTION_TIMEOUT=1200 - name: waagent.service command: start runtime: true content: | [Unit] Description=Microsoft Azure Agent Wants=network-online.target sshd-keygen.service After=network-online.target sshd-keygen.service [Service] Type=simple Restart=always RestartSec=5s ExecStart=/usr/share/oem/python/bin/python /usr/share/oem/bin/waagent -daemon - name: oem-cloudinit.service command: restart runtime: yes content: | [Unit] Description=Cloudinit from Azure metadata [Service] Type=oneshot ExecStart=/usr/bin/coreos-cloudinit --oem=azure oem: id: azure name: Microsoft Azure version-id: 2.1.4 home-url: https://azure.microsoft.com/ bug-report-url: https://github.com/coreos/bugs/issues Azure-WALinuxAgent-a976115/init/devuan/000077500000000000000000000000001510742556200176175ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/devuan/default/000077500000000000000000000000001510742556200212435ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/devuan/default/walinuxagent000066400000000000000000000001321510742556200236700ustar00rootroot00000000000000# To disable the Microsoft Azure Agent, set WALINUXAGENT_ENABLED=0 WALINUXAGENT_ENABLED=1 Azure-WALinuxAgent-a976115/init/devuan/walinuxagent000066400000000000000000000226701510742556200222570ustar00rootroot00000000000000#!/bin/bash # walinuxagent # script to start and stop the waagent daemon. 
# # This script takes into account the possibility that both daemon and # non-daemon instances of waagent may be running concurrently, # and attempts to ensure that any non-daemon instances are preserved # when the daemon instance is stopped. # ### BEGIN INIT INFO # Provides: walinuxagent # Required-Start: $remote_fs $syslog $network # Required-Stop: $remote_fs # X-Start-Before: cloud-init # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Microsoft Azure Linux Agent ### END INIT INFO DESC="Microsoft Azure Linux Agent" INTERPRETER="/usr/bin/python3" DAEMON='/usr/sbin/waagent' DAEMON_ARGS='-daemon' START_ARGS='--background' NAME='waagent' # set to 1 to enable a lot of debugging output DEBUG=0 . /lib/lsb/init-functions debugmsg() { # output a console message if DEBUG is set # (can be enabled dynamically by giving "debug" as an extra argument) if [ "x${DEBUG}" == "x1" ] ; then echo "[debug]: $1" >&2 fi return 0 } check_non_daemon_instances() { # check if there are any non-daemon instances of waagent running local NDPIDLIST i NDPIDCT declare -a NDPIDLIST debugmsg "check_non_daemon_instance: after init, #NDPIDLIST=${#NDPIDLIST[*]}" readarray -t NDPIDLIST < <( ps ax | grep "${INTERPRETER}" | grep "${DAEMON}" | grep -v -- "${DAEMON_ARGS}" | grep -v "grep" | awk '{ print $1 }') NDPIDCT=${#NDPIDLIST[@]} debugmsg "check_non_daemon_instances: NDPIDCT=${NDPIDCT}" debugmsg "check_non_daemon_instances: NDPIDLIST[0] = ${NDPIDLIST[0]}" if [ ${NDPIDCT} -gt 0 ] ; then debugmsg "check_non_daemon_instances: WARNING: non-daemon instances of waagent exist" else debugmsg "check_non_daemon_instances: no non-daemon instances of waagent are currently running" fi for (( i = 0 ; i < ${NDPIDCT} ; i++ )) ; do debugmsg "check_non_daemon_instances: WARNING: process ${NDPIDLIST[${i}]} is a non-daemon waagent instance" done return 0 } get_daemon_pid() { # (re)create PIDLIST, return the first entry local PID create_pidlist PID=${PIDLIST[0]} if [ -z "${PID}" ] ; then debugmsg 
"get_daemon_pid: : WARNING: no waagent daemon process found" fi echo "${PID}" } recheck_status() { # after an attempt to stop the daemon, re-check the status # and take any further actions required. # (NB: at the moment, we only re-check once. Possible improvement # would be to iterate the re-check up to a given maximum tries). local STATUS NEWSTATUS get_status STATUS=$? debugmsg "stop_waagent: status is now ${STATUS}" # ideal if stop has been successful: STATUS=1 - no daemon process case ${STATUS} in 0) # stop didn't work # what to do? maybe try kill -9 ? debugmsg "recheck_status: ERROR: unable to stop waagent" debugmsg "recheck_status: trying again with kill -9" kill_daemon_from_pid 1 # probably need to check status again? get_status NEW_STATUS=$? if [ "x${NEW_STATUS}" == "x1" ] ; then debugmsg "recheck_status: successfully stopped." log_end_msg 0 || true else # could probably do something more productive here debugmsg "recheck_status: unable to stop daemon - giving up" log_end_msg 1 || true exit 1 fi ;; 1) # THIS IS THE EXPECTED CASE: daemon is no longer running and debugmsg "recheck_status: waagent daemon stopped successfully." log_end_msg 0 || true ;; 2) # so weird that we can't figure out what's going on debugmsg "recheck_status: ERROR: unable to determine waagent status" debugmsg "recheck_status: manual intervention required" log_end_msg 1 || true exit 1 ;; esac } start_waagent() { # we use start-stop-daemon for starting waagent local STATUS get_status STATUS=$? 
# check the status value - take appropriate action debugmsg "start_waagent: STATUS=${STATUS}" case "${STATUS}" in 0) debugmsg "start_waagent: waagent is already running" log_daemon_msg "waagent is already running" log_end_msg 0 || true ;; 1) # not running (we ignore presence/absence of pidfile) # just start waagent debugmsg "start_waagent: waagent is not currently running" log_daemon_msg "Starting ${NAME} daemon" start-stop-daemon --start --quiet --background --name "${NAME}" --exec ${INTERPRETER} -- ${DAEMON} ${DAEMON_ARGS} log_end_msg $? || true ;; 2) # get_status can't figure out what's going on. # try doing a stop to clean up, then attempt to start waagent # will probably require manual intervention debugmsg "start_waagent: unable to determine current status" debugmsg "start_waagent: trying to stop waagent first, and then start it" stop_waagent log_daemon_msg "Starting ${NAME} daemon" start-stop-daemon --start --quiet --background --name ${NAME} --exec ${INTERPRETER} -- ${DAEMON} ${DAEMON_ARGS} log_end_msg $? || true ;; esac } kill_daemon_from_pidlist() { # check the pidlist for at least one waagent daemon process # if found, kill it directly from the entry in the pidlist # Ignore any pidfile. Avoid killing any non-daemon # waagent processes. 
# If called with "1" as first argument, use kill -9 rather than # normal kill local i PIDCT FORCE FORCE=0 if [ "x${1}" == "x1" ] ; then debugmsg "kill_daemon_from_pidlist: WARNING: using kill -9" FORCE=1 fi debugmsg "kill_daemon_from_pidlist: killing daemon using pid(s) in PIDLIST" PIDCT=${#PIDLIST[*]} if [ "${PIDCT}" -eq 0 ] ; then debugmsg "kill_daemon_from_pidlist: ERROR: no pids in PIDLIST" return 1 fi for (( i=0 ; i < ${PIDCT} ; i++ )) ; do debugmsg "kill_daemon_from_pidlist: killing waagent daemon process ${PIDLIST[${i}]}" if [ "x${FORCE}" == "x1" ] ; then kill -9 ${PIDLIST[${i}]} else kill ${PIDLIST[${i}]} fi done return 0 } stop_waagent() { # check the current status and if the waagent daemon is running, attempt # to stop it. # start-stop-daemon is avoided here local STATUS PID RC get_status STATUS=$? debugmsg "stop_waagent: current status = ${STATUS}" case "${STATUS}" in 0) # - ignore any pidfile - kill directly from process list log_daemon_msg "Stopping ${NAME} daemon (using process list)" kill_daemon_from_pidlist recheck_status ;; 1) # not running - we ignore any pidfile # REVISIT: should we check for a pidfile and remove if found? debugmsg "waagent is not running" log_daemon_msg "waagent is already stopped" log_end_msg 0 || true ;; 2) # weirdness - call for help debugmsg "ERROR: unable to determine waagent status - manual intervention required" log_daemon_msg "WARNING: unable to determine status of waagent daemon - manual intervention required" log_end_msg 1 || true ;; esac } check_daemons() { # check for running waagent daemon processes local ENTRY ps ax | grep "${INTERPRETER}" | grep "${DAEMON}" | grep -- "${DAEMON_ARGS}" | grep -v 'grep' | while read ENTRY ; do debugmsg "check_daemons(): ENTRY='${ENTRY}'" done return 0 } create_pidlist() { # initialise the list of waagent daemon processes # NB: there should only be one - both this script and waagent itself # attempt to avoid starting more than one daemon process. 
# However, we use an array just in case. readarray -t PIDLIST < <( ps ax | grep "${INTERPRETER}" | grep "${DAEMON}" | grep -- "${DAEMON_ARGS}" | grep -v 'grep' | awk '{ print $1 }') if [ "${#PIDLIST[*]}" -eq 0 ] ; then debugmsg "create_pidlist: WARNING: no waagent daemons found" elif [ "${#PIDLIST[*]}" -gt 1 ] ; then debugmsg "create_pidlist: WARNING: multiple waagent daemons running" fi return 0 } get_status() { # simplified status - ignoring any pidfile # Possibilities: # 0 - waagent daemon running # 1 - waagent daemon not running # 2 - status unclear # (NB: if we find that multiple daemons exist, we just ignore the fact. # It should be virtually impossible for this to happen) local FOUND RPID ENTRY STATUS DAEMON_RUNNING PIDCT PIDCT=0 DAEMON_RUNNING= RPID= ENTRY= # assume the worst STATUS=2 check_daemons create_pidlist # should only be one daemon running - but we check, just in case PIDCT=${#PIDLIST[@]} debugmsg "get_status: PIDCT=${PIDCT}" if [ ${PIDCT} -eq 0 ] ; then # not running STATUS=1 else # at least one daemon process is running if [ ${PIDCT} -gt 1 ] ; then debugmsg "get_status: WARNING: more than one waagent daemon running" debugmsg "get_status: (should not happen)" else debugmsg "get_status: only one daemon instance running - as expected" fi STATUS=0 fi return ${STATUS} } waagent_status() { # get the current status of the waagent daemon, and return it local STATUS get_status STATUS=$? 
debugmsg "waagent status = ${STATUS}" case ${STATUS} in 0) log_daemon_msg "waagent is running" ;; 1) log_daemon_msg "WARNING: waagent is not running" ;; 2) log_daemon_msg "WARNING: waagent status cannot be determined" ;; esac log_end_msg 0 || true return 0 } ######################################################################### # MAINLINE # Usage: "service [scriptname] [ start | stop | status | restart ] [ debug ]" # (specifying debug as an extra argument enables debugging output) ######################################################################### export PATH="${PATH:+$PATH:}/usr/sbin:/sbin" declare -a PIDLIST if [ ! -z "$2" -a "$2" == "debug" ] ; then DEBUG=1 fi # pre-check for non-daemon (e.g. console) instances of waagent check_non_daemon_instances case "$1" in start) start_waagent ;; stop) stop_waagent ;; status) waagent_status ;; restart) stop_waagent start_waagent ;; esac exit 0 Azure-WALinuxAgent-a976115/init/freebsd/000077500000000000000000000000001510742556200177475ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/freebsd/waagent000077500000000000000000000005151510742556200213240ustar00rootroot00000000000000#!/bin/sh # PROVIDE: waagent # REQUIRE: sshd netif dhclient # KEYWORD: nojail . /etc/rc.subr PATH=$PATH:/usr/local/bin:/usr/local/sbin name="waagent" rcvar="waagent_enable" pidfile="/var/run/waagent.pid" command="/usr/local/sbin/${name}" command_interpreter="python" command_args="start" load_rc_config $name run_rc_command "$1"
/etc/rc.d/init.d/functions RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent.sh start() { echo -n $"Starting $FriendlyName: " $WAZD_BIN -start & success echo } stop() { echo -n $"Stopping $FriendlyName: " killproc -p /var/run/waagent.pid $WAZD_BIN RETVAL=$? echo return $RETVAL } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; reload) ;; report) ;; status) status $WAZD_BIN RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL Azure-WALinuxAgent-a976115/init/mariner/000077500000000000000000000000001510742556200177725ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/mariner/waagent.service000066400000000000000000000006201510742556200230000ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=systemd-networkd-wait-online.service sshd.service sshd-keygen.service After=systemd-networkd-wait-online.service cloud-init.service ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/openbsd/000077500000000000000000000000001510742556200177675ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/openbsd/waagent000066400000000000000000000002311510742556200213340ustar00rootroot00000000000000#!/bin/sh daemon="python2.7 /usr/local/sbin/waagent -start" . 
/etc/rc.d/rc.subr pexp="python /usr/local/sbin/waagent -daemon" rc_reload=NO rc_cmd $1 Azure-WALinuxAgent-a976115/init/openrc/000077500000000000000000000000001510742556200176235ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/openrc/waagent000077500000000000000000000003061510742556200211760ustar00rootroot00000000000000#!/usr/sbin/openrc-run name="Microsoft Azure Linux Agent" command="/usr/sbin/waagent" command_args="-verbose -start" pidfile="/var/run/waagent.pid" depend() { after sshd provide waagent } Azure-WALinuxAgent-a976115/init/openwrt/000077500000000000000000000000001510742556200200335ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/openwrt/waagent000077500000000000000000000027061510742556200214140ustar00rootroot00000000000000#!/bin/sh /etc/rc.common # Init file for AzureLinuxAgent. # # Copyright 2018 Microsoft Corporation # Copyright 2018 Sonus Networks, Inc. (d.b.a. Ribbon Communications Operating Company) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # # description: AzureLinuxAgent # START=60 STOP=80 RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent WAZD_CONF=/etc/waagent.conf WAZD_PIDFILE=/var/run/waagent.pid test -x "$WAZD_BIN" || { echo "$WAZD_BIN not installed"; exit 5; } test -e "$WAZD_CONF" || { echo "$WAZD_CONF not found"; exit 6; } start() { echo -n "Starting $FriendlyName: " $WAZD_BIN -start RETVAL=$? 
echo return $RETVAL } stop() { echo -n "Stopping $FriendlyName: " if [ -f "$WAZD_PIDFILE" ] then kill -9 `cat ${WAZD_PIDFILE}` rm ${WAZD_PIDFILE} RETVAL=$? echo return $RETVAL else echo "$FriendlyName already stopped." fi } Azure-WALinuxAgent-a976115/init/photonos/000077500000000000000000000000001510742556200202065ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/photonos/waagent.service000066400000000000000000000006211510742556200232150ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=systemd-networkd-wait-online.service sshd.service sshd-keygen.service After=systemd-networkd-wait-online.service cloud-init.service ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/redhat/000077500000000000000000000000001510742556200176045ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/redhat/py2/000077500000000000000000000000001510742556200203165ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/redhat/py2/waagent.service000066400000000000000000000006321510742556200233270ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/redhat/waagent.service000066400000000000000000000006331510742556200226160ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent 
ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/sles/000077500000000000000000000000001510742556200173035ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/sles/waagent.service000066400000000000000000000005421510742556200223140ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/suse/000077500000000000000000000000001510742556200173145ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/suse/waagent000077500000000000000000000062011510742556200206670ustar00rootroot00000000000000#! /bin/sh # # Microsoft Azure Linux Agent sysV init script # # Copyright 2013 Microsoft Corporation # Copyright SUSE LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # /etc/init.d/waagent # # and symbolic link # # /usr/sbin/rcwaagent # # System startup script for the waagent # ### BEGIN INIT INFO # Provides: MicrosoftAzureLinuxAgent # Required-Start: $network sshd # Required-Stop: $network sshd # Default-Start: 3 5 # Default-Stop: 0 1 2 6 # Description: Start the MicrosoftAzureLinuxAgent ### END INIT INFO PYTHON=/usr/bin/python WAZD_BIN=/usr/sbin/waagent WAZD_CONF=/etc/waagent.conf WAZD_PIDFILE=/var/run/waagent.pid test -x "$WAZD_BIN" || { echo "$WAZD_BIN not installed"; exit 5; } test -e "$WAZD_CONF" || { echo "$WAZD_CONF not found"; exit 6; } . /etc/rc.status # First reset status of this service rc_reset # Return values acc. to LSB for all commands but status: # 0 - success # 1 - misc error # 2 - invalid or excess args # 3 - unimplemented feature (e.g. reload) # 4 - insufficient privilege # 5 - program not installed # 6 - program not configured # # Note that starting an already running service, stopping # or restarting a not-running service as well as the restart # with force-reload (in case signalling is not supported) are # considered a success. case "$1" in start) echo -n "Starting MicrosoftAzureLinuxAgent" ## Start daemon with startproc(8). If this fails ## the echo return value is set appropriate. startproc -f ${PYTHON} ${WAZD_BIN} -start rc_status -v ;; stop) echo -n "Shutting down MicrosoftAzureLinuxAgent" ## Stop daemon with killproc(8) and if this fails ## set echo the echo return value. killproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; try-restart) ## Stop the service and if this succeeds (i.e. the ## service was running before), start it again. $0 status >/dev/null && $0 restart rc_status ;; restart) ## Stop the service and regardless of whether it was ## running or not, start it again. 
$0 stop sleep 1 $0 start rc_status ;; force-reload|reload) rc_status ;; status) echo -n "Checking for service MicrosoftAzureLinuxAgent " ## Check status with checkproc(8), if process is running ## checkproc will return with exit status 0. checkproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; probe) ;; *) echo "Usage: $0 {start|stop|status|try-restart|restart|force-reload|reload}" exit 1 ;; esac rc_exit Azure-WALinuxAgent-a976115/init/ubuntu/000077500000000000000000000000001510742556200176575ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/init/ubuntu/walinuxagent000066400000000000000000000001321510742556200223040ustar00rootroot00000000000000# To disable the Microsoft Azure Agent, set WALINUXAGENT_ENABLED=0 WALINUXAGENT_ENABLED=1 Azure-WALinuxAgent-a976115/init/ubuntu/walinuxagent.conf000066400000000000000000000007321510742556200232360ustar00rootroot00000000000000description "Microsoft Azure Linux agent" author "Ben Howard " start on runlevel [2345] stop on runlevel [!2345] pre-start script [ -r /etc/default/walinuxagent ] && . /etc/default/walinuxagent if [ "$WALINUXAGENT_ENABLED" != "1" ]; then stop ; exit 0 fi if [ ! -x /usr/sbin/waagent ]; then stop ; exit 0 fi #Load the udf module modprobe -b udf end script exec /usr/sbin/waagent -daemon respawn Azure-WALinuxAgent-a976115/init/ubuntu/walinuxagent.service000066400000000000000000000011161510742556200237460ustar00rootroot00000000000000# # NOTE: # This file hosted on WALinuxAgent repository only for reference purposes. # Please refer to a recent image to find out the up-to-date systemd unit file. 
# [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/init/waagent000077500000000000000000000014761510742556200177210ustar00rootroot00000000000000#!/bin/bash # # Init file for AzureLinuxAgent. # # chkconfig: 2345 60 80 # description: AzureLinuxAgent # # source function library . /etc/rc.d/init.d/functions RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent start() { echo -n $"Starting $FriendlyName: " $WAZD_BIN -start RETVAL=$? echo return $RETVAL } stop() { echo -n $"Stopping $FriendlyName: " killproc -p /var/run/waagent.pid $WAZD_BIN RETVAL=$? echo return $RETVAL } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; reload) ;; report) ;; status) status $WAZD_BIN RETVAL=$? 
;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL Azure-WALinuxAgent-a976115/init/waagent.service000066400000000000000000000005411510742556200213450ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/makepkg.py000077500000000000000000000103401510742556200173640ustar00rootroot00000000000000#!/usr/bin/env python3 import argparse import glob import logging import os.path import shutil import subprocess import sys from azurelinuxagent.common.version import AGENT_NAME, AGENT_VERSION, \ AGENT_LONG_VERSION from azurelinuxagent.ga.guestagent import AGENT_MANIFEST_FILE MANIFEST = '''[{{ "name": "{0}", "version": 1.0, "handlerManifest": {{ "installCommand": "", "uninstallCommand": "", "updateCommand": "", "enableCommand": "python -u {1} -run-exthandlers", "disableCommand": "", "rebootAfterInstall": false, "reportHeartbeat": false }} }}]''' PUBLISH_MANIFEST = ''' Microsoft.OSTCLinuxAgent {1} {0} VmRole Microsoft Azure Guest Agent for Linux IaaS true https://github.com/Azure/WALinuxAgent/blob/2.1/LICENSE.txt https://github.com/Azure/WALinuxAgent/blob/2.1/LICENSE.txt https://github.com/Azure/WALinuxAgent true Microsoft Linux ''' PUBLISH_MANIFEST_FILE = 'manifest.xml' def do(*args): try: return subprocess.check_output(args, stderr=subprocess.STDOUT) except subprocess.CalledProcessError as e: # pylint: disable=C0103 raise Exception("[{0}] failed:\n{1}\n{2}".format(" ".join(args), str(e), e.output)) def run(agent_family, output_directory, log): output_path = os.path.join(output_directory, "eggs") target_path = os.path.join(output_path, AGENT_LONG_VERSION) bin_path = os.path.join(target_path, "bin") 
egg_path = os.path.join(bin_path, AGENT_LONG_VERSION + ".egg") manifest_path = os.path.join(target_path, AGENT_MANIFEST_FILE) publish_manifest_path = os.path.join(target_path, PUBLISH_MANIFEST_FILE) pkg_name = os.path.join(output_path, AGENT_LONG_VERSION + ".zip") if os.path.isdir(target_path): shutil.rmtree(target_path) elif os.path.isfile(target_path): os.remove(target_path) if os.path.isfile(pkg_name): os.remove(pkg_name) os.makedirs(bin_path) log.info("Created {0} directory".format(target_path)) setup_path = os.path.join(os.path.dirname(__file__), "setup.py") args = ["python3", setup_path, "bdist_egg", "--dist-dir={0}".format(bin_path)] log.info("Creating egg {0}".format(egg_path)) do(*args) egg_name = os.path.join("bin", os.path.basename( glob.glob(os.path.join(bin_path, "*"))[0])) log.info("Writing {0}".format(manifest_path)) with open(manifest_path, mode='w') as manifest: manifest.write(MANIFEST.format(AGENT_NAME, egg_name)) log.info("Writing {0}".format(publish_manifest_path)) with open(publish_manifest_path, mode='w') as publish_manifest: publish_manifest.write(PUBLISH_MANIFEST.format(AGENT_VERSION, agent_family)) cwd = os.getcwd() os.chdir(target_path) try: log.info("Creating package {0}".format(pkg_name)) do("zip", "-r", pkg_name, egg_name) do("zip", "-j", pkg_name, AGENT_MANIFEST_FILE) do("zip", "-j", pkg_name, PUBLISH_MANIFEST_FILE) finally: os.chdir(cwd) log.info("Package {0} successfully created".format(pkg_name)) if __name__ == "__main__": logging.basicConfig(format='%(message)s', level=logging.INFO) parser = argparse.ArgumentParser() parser.add_argument('family', metavar='family', nargs='?', default='Test', help='Agent family') parser.add_argument('-o', '--output', default=os.getcwd(), help='Output directory') arguments = parser.parse_args() try: run(arguments.family, arguments.output, logging) except Exception as exception: logging.error(str(exception)) sys.exit(1) sys.exit(0) 
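The doubled braces in makepkg.py's `MANIFEST` template escape `str.format()`, so only the agent name and the egg path are substituted when the HandlerManifest JSON is written into the package. A quick sanity check of that expansion — a standalone sketch, not part of the repository; the name and egg path below are hypothetical stand-ins for `AGENT_NAME` and the egg filename computed in `run()`:

```python
import json

# Template copied from makepkg.py; {{ and }} render as literal braces under str.format().
MANIFEST = '''[{{
  "name": "{0}",
  "version": 1.0,
  "handlerManifest": {{
    "installCommand": "",
    "uninstallCommand": "",
    "updateCommand": "",
    "enableCommand": "python -u {1} -run-exthandlers",
    "disableCommand": "",
    "rebootAfterInstall": false,
    "reportHeartbeat": false
  }}
}}]'''

# Hypothetical stand-ins for AGENT_NAME and the built egg's relative path.
text = MANIFEST.format("WALinuxAgent", "bin/WALinuxAgent-9.9.9.egg")
manifest = json.loads(text)[0]  # the template is a one-element JSON array

print(manifest["name"])
print(manifest["handlerManifest"]["enableCommand"])
```

Round-tripping the formatted string through `json.loads` also catches template edits that would break the JSON before it is zipped into the package.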
Azure-WALinuxAgent-a976115/requirements.txt000066400000000000000000000001111510742556200206470ustar00rootroot00000000000000distro; python_version >= '3.8' pyasn1 crypt-r; python_version >= '3.13' Azure-WALinuxAgent-a976115/setup.py000077500000000000000000000362731510742556200171220ustar00rootroot00000000000000#!/usr/bin/env python # # Microsoft Azure Linux Agent setup.py # # Copyright 2013 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import gzip import os import shutil import subprocess import sys import setuptools from setuptools import find_packages from setuptools.command.install import install as _install from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.version import AGENT_NAME, AGENT_VERSION, \ AGENT_DESCRIPTION, \ DISTRO_NAME, DISTRO_VERSION, DISTRO_FULL_NAME root_dir = os.path.dirname(os.path.abspath(__file__)) # pylint: disable=invalid-name os.chdir(root_dir) def set_files(data_files, dest=None, src=None): data_files.append((dest, src)) def set_bin_files(data_files, dest, src=None): if src is None: src = ["bin/waagent", "bin/waagent2.0"] data_files.append((dest, src)) def set_conf_files(data_files, dest="/etc", src=None): if src is None: src = ["config/waagent.conf"] data_files.append((dest, src)) def set_logrotate_files(data_files, dest="/etc/logrotate.d", src=None): if src is None: src = ["config/waagent.logrotate"] data_files.append((dest, src)) def set_sysv_files(data_files, dest="/etc/rc.d/init.d", src=None): 
if src is None: src = ["init/waagent"] data_files.append((dest, src)) def set_openrc_files(data_files, dest="/etc/init.d", src=None): if src is None: src = ["init/openrc/waagent"] data_files.append((dest, src)) def set_systemd_files(data_files, dest, src=None): if src is None: src = ["init/waagent.service"] data_files.append((dest, src)) def set_freebsd_rc_files(data_files, dest="/etc/rc.d/", src=None): if src is None: src = ["init/freebsd/waagent"] data_files.append((dest, src)) def set_openbsd_rc_files(data_files, dest="/etc/rc.d/", src=None): if src is None: src = ["init/openbsd/waagent"] data_files.append((dest, src)) def set_udev_files(data_files, dest="/etc/udev/rules.d/", src=None): if src is None: src = ["config/66-azure-storage.rules", "config/99-azure-product-uuid.rules"] data_files.append((dest, src)) def set_man_files(data_files, dest="/usr/share/man/man1", src=None): if src is None: src = ["doc/man/waagent.1"] src_gz = [] for file in src: with open(file, 'rb') as f_in, gzip.open(file+".gz", 'wb') as f_out: shutil.copyfileobj(f_in, f_out) src_gz.append(file+".gz") data_files.append((dest, src_gz)) def get_data_files(name, version, fullname): # pylint: disable=R0912 """ Determine data_files according to distro name, version and init system type """ data_files = [] osutil = get_osutil() systemd_dir_path = osutil.get_systemd_unit_file_install_path() agent_bin_path = osutil.get_agent_bin_path() if name in ('redhat', 'rhel', 'centos', 'almalinux', 'cloudlinux', 'rocky'): if version.startswith(("8", "9", "10")): # redhat8+ default to py3 set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) else: set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_udev_files(data_files) set_man_files(data_files) if version.startswith(("8", "9", "10")): # redhat 8+ uses systemd and python3 set_systemd_files(data_files, dest=systemd_dir_path, src=["init/redhat/waagent.service", 
"init/azure.slice", "init/azure-vmextensions.slice" ]) elif version.startswith("6"): set_sysv_files(data_files) else: # redhat7.0+ use systemd set_systemd_files(data_files, dest=systemd_dir_path, src=[ "init/redhat/py2/waagent.service", "init/azure.slice", "init/azure-vmextensions.slice" ]) if version.startswith("7.1"): # TODO this is a mitigation to systemctl bug on 7.1 set_sysv_files(data_files) elif name == 'arch': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/arch/waagent.conf"]) set_udev_files(data_files) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/arch/waagent.service"]) elif name in ('coreos', 'flatcar'): set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, dest="/usr/share/oem", src=["config/coreos/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) set_files(data_files, dest="/usr/share/oem", src=["init/coreos/cloud-config.yml"]) elif "Clear Linux" in fullname: set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, dest="/usr/share/defaults/waagent", src=["config/clearlinux/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/clearlinux/waagent.service"]) elif name in ["mariner", "azurelinux"]: set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, dest="/etc", src=["config/mariner/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/mariner/waagent.service"]) set_logrotate_files(data_files) set_udev_files(data_files) elif name == 'ubuntu': set_conf_files(data_files, src=["config/ubuntu/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) if version.startswith("12") or version.startswith("14"): # Ubuntu12.04/14.04 - uses upstart if version.startswith("12"): set_bin_files(data_files, dest=agent_bin_path) else: set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_files(data_files, dest="/etc/init", 
src=["init/ubuntu/walinuxagent.conf"]) set_files(data_files, dest='/etc/default', src=['init/ubuntu/walinuxagent']) else: set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) # Ubuntu15.04+ uses systemd set_systemd_files(data_files, dest=systemd_dir_path, src=[ "init/ubuntu/walinuxagent.service", "init/azure.slice", "init/azure-vmextensions.slice" ]) elif name == 'suse' or name == 'opensuse': # pylint: disable=R1714 set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/suse/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) if fullname == 'SUSE Linux Enterprise Server' and \ version.startswith('11') or \ fullname == 'openSUSE' and version.startswith( '13.1'): set_sysv_files(data_files, dest='/etc/init.d', src=["init/suse/waagent"]) else: # sles 12+ and openSUSE 13.2+ use systemd set_systemd_files(data_files, dest=systemd_dir_path) elif name == 'sles': # sles 15+ distro named as sles set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_conf_files(data_files, src=["config/suse/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) # sles 15+ uses systemd and python3 set_systemd_files(data_files, dest=systemd_dir_path, src=["init/sles/waagent.service"]) elif name == 'freebsd': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/freebsd/waagent.conf"]) set_freebsd_rc_files(data_files) elif name == 'openbsd': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/openbsd/waagent.conf"]) set_openbsd_rc_files(data_files) elif name == 'debian': set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_conf_files(data_files, src=["config/debian/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files, dest="/lib/udev/rules.d") if debian_has_systemd(): set_systemd_files(data_files, dest=systemd_dir_path) 
elif name == 'devuan': set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_files(data_files, dest="/etc/init.d", src=['init/devuan/walinuxagent']) set_files(data_files, dest="/etc/default", src=['init/devuan/default/walinuxagent']) set_conf_files(data_files, src=['config/devuan/waagent.conf']) set_logrotate_files(data_files) set_udev_files(data_files, dest="/lib/udev/rules.d") elif name == 'iosxe': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/iosxe/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) set_systemd_files(data_files, dest=systemd_dir_path) if version.startswith("7.1"): # TODO this is a mitigation to systemctl bug on 7.1 set_sysv_files(data_files) elif name == 'openwrt': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_sysv_files(data_files, dest='/etc/init.d', src=["init/openwrt/waagent"]) elif name == 'photonos': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/photonos/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/photonos/waagent.service"]) elif name == 'fedora': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_udev_files(data_files) set_systemd_files(data_files, dest=systemd_dir_path) set_man_files(data_files) elif name == 'chainguard': set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent"]) set_conf_files(data_files, src=["config/chainguard/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/chainguard/waagent.service", "init/chainguard/90-waagent.preset", "init/azure.slice", "init/azure-vmextensions.slice" ]) set_udev_files(data_files) elif name in ('alpine', 'alpaquita'): set_bin_files(data_files, dest=agent_bin_path, src=['bin/waagent']) set_conf_files(data_files, src=["config/alpine/waagent.conf"]) 
set_logrotate_files(data_files) set_udev_files(data_files) set_openrc_files(data_files) else: # Use default setting set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_udev_files(data_files) set_sysv_files(data_files) return data_files def debian_has_systemd(): try: return subprocess.check_output( ['cat', '/proc/1/comm']).strip().decode() == 'systemd' except subprocess.CalledProcessError: return False class install(_install): # pylint: disable=C0103 user_options = _install.user_options + [ ('lnx-distro=', None, 'target Linux distribution'), ('lnx-distro-version=', None, 'target Linux distribution version'), ('lnx-distro-fullname=', None, 'target Linux distribution full name'), ('register-service', None, 'register as startup service and start'), ('skip-data-files', None, 'skip data files installation'), ] def initialize_options(self): _install.initialize_options(self) # pylint: disable=attribute-defined-outside-init self.lnx_distro = DISTRO_NAME self.lnx_distro_version = DISTRO_VERSION self.lnx_distro_fullname = DISTRO_FULL_NAME self.register_service = False # All our data files are system-wide files that are not included in the egg; skip them when # creating an egg. self.skip_data_files = "bdist_egg" in sys.argv # pylint: enable=attribute-defined-outside-init def finalize_options(self): _install.finalize_options(self) if self.skip_data_files: return data_files = get_data_files(self.lnx_distro, self.lnx_distro_version, self.lnx_distro_fullname) self.distribution.data_files = data_files self.distribution.reinitialize_command('install_data', True) def run(self): _install.run(self) if self.register_service: osutil = get_osutil() osutil.register_agent_service() osutil.stop_agent_service() osutil.start_agent_service() # Note to packagers and users from source. # * In version 3.5 of Python distribution information handling in the platform # module was deprecated. 
Depending on the Linux distribution the # implementation may be broken prior to Python 3.8 where the functionality # will be removed from Python 3. # * In version 3.13 of Python, the crypt module was removed and crypt-r is # required instead. requires = [] if sys.version_info[0] >= 3 and sys.version_info[1] >= 8: requires.append('distro') if sys.version_info[0] >= 3 and sys.version_info[1] >= 13: requires.append('crypt-r') modules = [] # pylint: disable=invalid-name if "bdist_egg" in sys.argv: modules.append("__main__") setuptools.setup( name=AGENT_NAME, version=AGENT_VERSION, long_description=AGENT_DESCRIPTION, author='Microsoft Corporation', author_email='walinuxagent@microsoft.com', platforms='Linux', url='https://github.com/Azure/WALinuxAgent', license='Apache License Version 2.0', packages=find_packages(exclude=["tests*", "dcr*"]), py_modules=modules, install_requires=requires, cmdclass={ 'install': install } ) Azure-WALinuxAgent-a976115/test-requirements.txt000066400000000000000000000017161510742556200216400ustar00rootroot00000000000000coverage mock==2.0.0; python_version == '2.6' mock==3.0.5; python_version >= '2.7' and python_version <= '3.5' mock==4.0.2; python_version >= '3.6' distro; python_version >= '3.8' nose; python_version <= '3.9' nose-timer; python_version >= '2.7' and python_version <= '3.9' pytest; python_version >= '3.10' # Pinning the setuptools to 79.0.1 due to support for egg-based install has been removed https://setuptools.pypa.io/en/stable/history.html#v80-0-0 setuptools==79.0.1; python_version >= '3.12' # Pinning the wrapt requirement to 1.12.0 due to the bug - https://github.com/GrahamDumpleton/wrapt/issues/188 wrapt==1.12.0; python_version > '2.6' and python_version < '3.6' pylint; python_version > '2.6' and python_version < '3.6' pylint==2.8.3; python_version >= '3.6' # Requirements to run pylint on the end-to-end tests source code assertpy azure-core azure-identity azure-mgmt-compute>=22.1.0 azure-mgmt-network>=19.3.0 
azure-mgmt-resource>=15.0.0 msrestazure pytz Azure-WALinuxAgent-a976115/tests/000077500000000000000000000000001510742556200165345ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/__init__.py000066400000000000000000000011651510742556200206500ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/common/000077500000000000000000000000001510742556200200245ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/common/__init__.py000066400000000000000000000011651510742556200221400ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/common/dhcp/000077500000000000000000000000001510742556200207425ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/common/dhcp/__init__.py000066400000000000000000000011651510742556200230560ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/common/dhcp/test_dhcp.py000066400000000000000000000165241510742556200233010ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import mock import azurelinuxagent.common.dhcp as dhcp import azurelinuxagent.common.osutil.default as osutil from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP from tests.lib.tools import AgentTestCase, open_patch, patch class TestDHCP(AgentTestCase): DEFAULT_ROUTING_TABLE = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ eth0 00345B0A 00000000 0001 0 0 5 00000000 0 0 0 \n\ lo 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_wireserver_route_exists(self): # setup dhcp_handler = dhcp.get_dhcp_handler() self.assertTrue(dhcp_handler.endpoint is None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) # execute routing_table_with_wireserver_route = TestDHCP.DEFAULT_ROUTING_TABLE + \ "eth0 00000000 10813FA8 0003 0 0 5 00000000 0 0 0 \n" with patch("os.path.exists", return_value=True): open_file_mock = mock.mock_open(read_data=routing_table_with_wireserver_route) with patch(open_patch(), open_file_mock): self.assertTrue(dhcp_handler.wireserver_route_exists) # test self.assertTrue(dhcp_handler.endpoint is not None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) def test_wireserver_route_not_exists(self): # setup dhcp_handler = dhcp.get_dhcp_handler() self.assertTrue(dhcp_handler.endpoint is None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) # execute with patch("os.path.exists", return_value=True): open_file_mock = mock.mock_open(read_data=TestDHCP.DEFAULT_ROUTING_TABLE) with patch(open_patch(), open_file_mock): self.assertFalse(dhcp_handler.wireserver_route_exists) # test self.assertTrue(dhcp_handler.endpoint is None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) def test_dhcp_cache_exists(self): dhcp_handler 
= dhcp.get_dhcp_handler() dhcp_handler.osutil = osutil.DefaultOSUtil() with patch.object(osutil.DefaultOSUtil, 'get_dhcp_lease_endpoint', return_value=None): self.assertFalse(dhcp_handler.dhcp_cache_exists) self.assertEqual(dhcp_handler.endpoint, None) with patch.object(osutil.DefaultOSUtil, 'get_dhcp_lease_endpoint', return_value="foo"): self.assertTrue(dhcp_handler.dhcp_cache_exists) self.assertEqual(dhcp_handler.endpoint, "foo") def test_dhcp_skip_cache(self): handler = dhcp.get_dhcp_handler() handler.osutil = osutil.DefaultOSUtil() open_file_mock = mock.mock_open(read_data=TestDHCP.DEFAULT_ROUTING_TABLE) with patch('os.path.exists', return_value=False): with patch.object(osutil.DefaultOSUtil, 'get_dhcp_lease_endpoint')\ as patch_dhcp_cache: with patch.object(dhcp.DhcpHandler, 'send_dhcp_req') \ as patch_dhcp_send: endpoint = 'foo' patch_dhcp_cache.return_value = endpoint # endpoint comes from cache self.assertFalse(handler.skip_cache) with patch("os.path.exists", return_value=True): with patch(open_patch(), open_file_mock): handler.run() self.assertTrue(patch_dhcp_cache.call_count == 1) self.assertTrue(patch_dhcp_send.call_count == 0) self.assertTrue(handler.endpoint == endpoint) # reset handler.skip_cache = True handler.endpoint = None # endpoint comes from dhcp request self.assertTrue(handler.skip_cache) with patch("os.path.exists", return_value=True): with patch(open_patch(), open_file_mock): handler.run() self.assertTrue(patch_dhcp_cache.call_count == 1) self.assertTrue(patch_dhcp_send.call_count == 1) def test_dhcp_send_req_dhcp_unavailable(self): handler = dhcp.get_dhcp_handler() handler.skip_cache = True # Force test to skip cache and get to send_dhcp_req handler.osutil = osutil.DefaultOSUtil() # Mock routing table so that it doesn't have wireserver route with patch("os.path.exists", return_value=True): open_file_mock = mock.mock_open(read_data=TestDHCP.DEFAULT_ROUTING_TABLE) with patch(open_patch(), open_file_mock): # Mock osutil so dhcp is not 
available with patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.is_dhcp_available', return_value=False): with patch.object(dhcp.DhcpHandler, '_send_dhcp_req') as patch_dhcp_send: handler.run() # Assert that endpoint is set to known wireserver ip self.assertEqual(handler.endpoint, KNOWN_WIRESERVER_IP) # Assert that the dhcp request did not get sent, because dhcp is not available self.assertTrue(patch_dhcp_send.call_count == 0) def test_dhcp_send_req_dhcp_discovery_disabled(self): handler = dhcp.get_dhcp_handler() handler.skip_cache = True # Force test to skip cache and get to send_dhcp_req handler.osutil = osutil.DefaultOSUtil() # Mock routing table so that it doesn't have wireserver route with patch("os.path.exists", return_value=True): open_file_mock = mock.mock_open(read_data=TestDHCP.DEFAULT_ROUTING_TABLE) with patch(open_patch(), open_file_mock): # Mock osutil so dhcp is not available with patch('azurelinuxagent.common.conf.get_dhcp_discovery_enabled', return_value=False): with patch.object(dhcp.DhcpHandler, '_send_dhcp_req') as patch_dhcp_send: handler.run() # Assert that endpoint is set to known wireserver ip self.assertEqual(handler.endpoint, KNOWN_WIRESERVER_IP) # Assert that the dhcp request did not get sent, because dhcp discovery is disabled self.assertTrue(patch_dhcp_send.call_count == 0) Azure-WALinuxAgent-a976115/tests/common/osutil/000077500000000000000000000000001510742556200213435ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/common/osutil/__init__.py000066400000000000000000000011651510742556200234570ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/common/osutil/test_alpine.py000066400000000000000000000022261510742556200242260ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.alpine import AlpineOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestAlpineOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, AlpineOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/osutil/test_arch.py000066400000000000000000000022101510742556200236640ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.arch import ArchUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestArchUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, ArchUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/osutil/test_bigip.py000066400000000000000000000270711510742556200240550ustar00rootroot00000000000000# Copyright 2016 F5 Networks Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import socket import time import unittest import azurelinuxagent.common.logger as logger import azurelinuxagent.common.osutil.bigip as osutil import azurelinuxagent.common.osutil.default as default import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.bigip import BigIpOSUtil from tests.lib.tools import AgentTestCase, patch from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestBigIpOSUtil_wait_until_mcpd_is_initialized(AgentTestCase): @patch.object(shellutil, "run", return_value=0) @patch.object(logger, "info", return_value=None) def test_success(self, *args): result = osutil.BigIpOSUtil._wait_until_mcpd_is_initialized( osutil.BigIpOSUtil() ) self.assertEqual(result, True) # There are two logger calls in the mcpd wait function. 
The second # occurs after mcpd is found to be "up" self.assertEqual(args[0].call_count, 2) @patch.object(shellutil, "run", return_value=1) @patch.object(logger, "info", return_value=None) @patch.object(time, "sleep", return_value=None) def test_failure(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil._wait_until_mcpd_is_initialized, osutil.BigIpOSUtil() ) class TestBigIpOSUtil_save_sys_config(AgentTestCase): @patch.object(shellutil, "run", return_value=0) @patch.object(logger, "error", return_value=None) def test_success(self, *args): result = osutil.BigIpOSUtil._save_sys_config(osutil.BigIpOSUtil()) self.assertEqual(result, 0) self.assertEqual(args[0].call_count, 0) @patch.object(shellutil, "run", return_value=1) @patch.object(logger, "error", return_value=None) def test_failure(self, *args): result = osutil.BigIpOSUtil._save_sys_config(osutil.BigIpOSUtil()) self.assertEqual(result, 1) self.assertEqual(args[0].call_count, 1) class TestBigIpOSUtil_useradd(AgentTestCase): @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=None) @patch.object(shellutil, "run_command") def test_success(self, *args): args[0].return_value = (0, None) result = osutil.BigIpOSUtil.useradd( osutil.BigIpOSUtil(), 'foo', expiration=None ) self.assertEqual(result, 0) @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=None) def test_user_already_exists(self, *args): args[0].return_value = 'admin' result = osutil.BigIpOSUtil.useradd( osutil.BigIpOSUtil(), 'admin', expiration=None ) self.assertEqual(result, None) @patch.object(shellutil, "run", return_value=1) def test_failure(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.useradd, osutil.BigIpOSUtil(), 'foo', expiration=None ) class TestBigIpOSUtil_chpasswd(AgentTestCase): @patch.object(shellutil, "run_command") @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=True) @patch.object(osutil.BigIpOSUtil, 
'is_sys_user', return_value=False) @patch.object(osutil.BigIpOSUtil, '_save_sys_config', return_value=None) def test_success(self, *args): result = osutil.BigIpOSUtil.chpasswd( osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) self.assertEqual(result, 0) self.assertEqual(args[0].call_count, 1) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=True) def test_is_sys_user(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.chpasswd, osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) @patch.object(shellutil, "run_get_output", return_value=(1, None)) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=False) def test_failed_to_set_user_password(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.chpasswd, osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) @patch.object(shellutil, "run_get_output", return_value=(0, None)) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=False) @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=None) def test_failed_to_get_user_entry(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.chpasswd, osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) class TestBigIpOSUtil_get_dvd_device(AgentTestCase): @patch.object(os, "listdir", return_value=['tty1','cdrom0']) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.get_dvd_device( osutil.BigIpOSUtil(), '/dev' ) self.assertEqual(result, '/dev/cdrom0') @patch.object(os, "listdir", return_value=['foo', 'bar']) def test_failure(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.get_dvd_device, osutil.BigIpOSUtil(), '/dev' ) class TestBigIpOSUtil_restart_ssh_service(AgentTestCase): @patch.object(shellutil, "run",
return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.restart_ssh_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_stop_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.stop_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_start_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.start_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_register_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.register_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_unregister_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.unregister_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_set_hostname(AgentTestCase): @patch.object(os.path, "exists", return_value=False) def test_success(self, *args): result = osutil.BigIpOSUtil.set_hostname( # pylint: disable=assignment-from-none osutil.BigIpOSUtil(), None ) self.assertEqual(args[0].call_count, 0) self.assertEqual(result, None) class TestBigIpOSUtil_set_dhcp_hostname(AgentTestCase): @patch.object(os.path, "exists", return_value=False) def test_success(self, *args): result = osutil.BigIpOSUtil.set_dhcp_hostname( # pylint: disable=assignment-from-none osutil.BigIpOSUtil(), None ) self.assertEqual(args[0].call_count, 0) self.assertEqual(result, None) class TestBigIpOSUtil_get_first_if(AgentTestCase): 
@patch.object(osutil.BigIpOSUtil, '_format_single_interface_name', return_value=b'eth0') def test_success(self, *args): # pylint: disable=unused-argument ifname, ipaddr = osutil.BigIpOSUtil().get_first_if() self.assertTrue(ifname.startswith('eth')) self.assertTrue(ipaddr is not None) try: socket.inet_aton(ipaddr) except socket.error: self.fail("not a valid ip address") @patch.object(osutil.BigIpOSUtil, '_format_single_interface_name', return_value=b'loenp0s3') def test_success_non_eth_interface(self, *args): # pylint: disable=unused-argument ifname, ipaddr = osutil.BigIpOSUtil().get_first_if() self.assertFalse(ifname.startswith('eth')) self.assertTrue(ipaddr is not None) try: socket.inet_aton(ipaddr) except socket.error: self.fail("not a valid ip address") class TestBigIpOSUtil_mount_dvd(AgentTestCase): @patch.object(shellutil, "run", return_value=0) @patch.object(time, "sleep", return_value=None) @patch.object(osutil.BigIpOSUtil, '_wait_until_mcpd_is_initialized', return_value=None) @patch.object(default.DefaultOSUtil, 'mount_dvd', return_value=None) def test_success(self, *args): osutil.BigIpOSUtil.mount_dvd( osutil.BigIpOSUtil(), max_retry=6, chk_err=True ) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[1].call_count, 1) class TestBigIpOSUtil_route_add(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): osutil.BigIpOSUtil.route_add( osutil.BigIpOSUtil(), '10.10.10.0', '255.255.255.0', '10.10.10.1' ) self.assertEqual(args[0].call_count, 1) class TestBigIpOSUtil_device_for_ide_port(AgentTestCase): @patch.object(time, "sleep", return_value=None) @patch.object(os.path, "exists", return_value=False) @patch.object(default.DefaultOSUtil, 'device_for_ide_port', return_value=None) def test_success_waiting(self, *args): osutil.BigIpOSUtil.device_for_ide_port( osutil.BigIpOSUtil(), '5' ) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[1].call_count, 99) self.assertEqual(args[2].call_count, 99)
@patch.object(time, "sleep", return_value=None) @patch.object(os.path, "exists", return_value=True) @patch.object(default.DefaultOSUtil, 'device_for_ide_port', return_value=None) def test_success_immediate(self, *args): osutil.BigIpOSUtil.device_for_ide_port( osutil.BigIpOSUtil(), '5' ) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[1].call_count, 1) self.assertEqual(args[2].call_count, 0) class TestBigIpOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, BigIpOSUtil()) if __name__ == '__main__': unittest.main()Azure-WALinuxAgent-a976115/tests/common/osutil/test_clearlinux.py000066400000000000000000000022401510742556200251200ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.clearlinux import ClearLinuxUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestClearLinuxUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, ClearLinuxUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/osutil/test_coreos.py000066400000000000000000000022221510742556200242440ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.coreos import CoreOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestCoreOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, CoreOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/osutil/test_default.py000066400000000000000000001061521510742556200244050ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import glob import os import socket import tempfile import unittest import mock import azurelinuxagent.common.conf as conf import azurelinuxagent.common.osutil.default as osutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil from tests.lib.mock_environment import MockEnvironment from tests.lib.tools import AgentTestCase, patch, open_patch, load_data, data_dir actual_get_proc_net_route = 'azurelinuxagent.common.osutil.default.DefaultOSUtil._get_proc_net_route' def fake_is_loopback(_, iface): return iface.startswith('lo') class TestOSUtil(AgentTestCase): def test_restart(self): # setup retries = 3 ifname = 'dummy' with patch.object(shellutil, "run") as run_patch: run_patch.return_value = 1 # execute osutil.DefaultOSUtil.restart_if(osutil.DefaultOSUtil(), ifname=ifname, retries=retries, wait=0) # assert self.assertEqual(run_patch.call_count, retries) self.assertEqual(run_patch.call_args_list[0][0][0], 'ifdown {0} && ifup {0}'.format(ifname)) def test_get_dvd_device_success(self): with patch.object(os, 'listdir', return_value=['cpu', 'cdrom0']): osutil.DefaultOSUtil().get_dvd_device() def test_get_dvd_device_failure(self): with patch.object(os, 'listdir', return_value=['cpu', 'notmatching']): try: osutil.DefaultOSUtil().get_dvd_device() self.fail('OSUtilError was not raised') except OSUtilError as ose: self.assertTrue('notmatching' in ustr(ose)) @patch('time.sleep') def test_mount_dvd_success(self, _): msg = 'message' with patch.object(osutil.DefaultOSUtil, 'get_dvd_device', return_value='/dev/cdrom'): with patch.object(shellutil, 'run_command', return_value=msg): with patch.object(os, 'makedirs'): try: osutil.DefaultOSUtil().mount_dvd() except 
OSUtilError: self.fail("mounting failed") @patch('time.sleep') def test_mount_dvd_failure(self, _): msg = 'message' exception = shellutil.CommandError("mount dvd", 1, "", msg) with patch.object(osutil.DefaultOSUtil, 'get_dvd_device', return_value='/dev/cdrom'): with patch.object(shellutil, 'run_command', side_effect=exception) as patch_run: with patch.object(os, 'makedirs'): try: osutil.DefaultOSUtil().mount_dvd() self.fail('OSUtilError was not raised') except OSUtilError as ose: self.assertTrue(msg in ustr(ose)) self.assertEqual(patch_run.call_count, 5) def test_empty_proc_net_route(self): routing_table = "" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertEqual(len(osutil.DefaultOSUtil().read_route_table()), 0) def test_no_routes(self): routing_table = 'Iface\tDestination\tGateway \tFlags\tRefCnt\tUse\tMetric\tMask\t\tMTU\tWindow\tIRTT \n' mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): raw_route_list = osutil.DefaultOSUtil().read_route_table() self.assertEqual(len(osutil.DefaultOSUtil().get_list_of_routes(raw_route_list)), 0) def test_bogus_proc_net_route(self): routing_table = 'Iface\tDestination\tGateway \tFlags\t\tUse\tMetric\t\neth0\t00000000\t00000000\t0001\t\t0\t0\n' mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): raw_route_list = osutil.DefaultOSUtil().read_route_table() self.assertEqual(len(osutil.DefaultOSUtil().get_list_of_routes(raw_route_list)), 0) def test_valid_routes(self): routing_table = \ 'Iface\tDestination\tGateway \tFlags\tRefCnt\tUse\tMetric\tMask\t\tMTU\tWindow\tIRTT \n' \ 'eth0\t00000000\tC1BB910A\t0003\t0\t0\t0\t00000000\t0\t0\t0 \n' \ 'eth0\tC0BB910A\t00000000\t0001\t0\t0\t0\tC0FFFFFF\t0\t0\t0 \n' \ 'eth0\t10813FA8\tC1BB910A\t000F\t0\t0\t0\tFFFFFFFF\t0\t0\t0 \n' \ 'eth0\tFEA9FEA9\tC1BB910A\t0007\t0\t0\t0\tFFFFFFFF\t0\t0\t0 \n' \ 'docker0\t002BA8C0\t00000000\t0001\t0\t0\t10\t00FFFFFF\t0\t0\t0 \n' mo = mock.mock_open(read_data=routing_table) with 
patch(open_patch(), mo): raw_route_list = osutil.DefaultOSUtil().read_route_table() self.assertEqual(len(raw_route_list), 6) route_list = osutil.DefaultOSUtil().get_list_of_routes(raw_route_list) self.assertEqual(len(route_list), 5) self.assertEqual(route_list[0].gateway_quad(), '10.145.187.193') self.assertEqual(route_list[1].gateway_quad(), '0.0.0.0') self.assertEqual(route_list[1].mask_quad(), '255.255.255.192') self.assertEqual(route_list[2].destination_quad(), '168.63.129.16') self.assertEqual(route_list[1].flags, 1) self.assertEqual(route_list[2].flags, 15) self.assertEqual(route_list[3].flags, 7) self.assertEqual(route_list[3].metric, 0) self.assertEqual(route_list[4].metric, 10) self.assertEqual(route_list[0].interface, 'eth0') self.assertEqual(route_list[4].interface, 'docker0') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.get_primary_interface', return_value='eth0') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil._get_all_interfaces', return_value={'eth0':'10.0.0.1'}) @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.is_loopback', fake_is_loopback) def test_get_first_if(self, get_all_interfaces_mock, get_primary_interface_mock): # pylint: disable=unused-argument """ Validate that the agent can find the first active non-loopback interface. This test case used to run live, but not all developers have an eth* interface. It is perfectly valid to have a br*, but this test does not account for that. 
""" ifname, ipaddr = osutil.DefaultOSUtil().get_first_if() self.assertEqual(ifname, 'eth0') self.assertEqual(ipaddr, '10.0.0.1') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.get_primary_interface', return_value='bogus0') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil._get_all_interfaces', return_value={'eth0':'10.0.0.1', 'lo': '127.0.0.1'}) @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.is_loopback', fake_is_loopback) def test_get_first_if_nosuchprimary(self, get_all_interfaces_mock, get_primary_interface_mock): # pylint: disable=unused-argument ifname, ipaddr = osutil.DefaultOSUtil().get_first_if() self.assertTrue(ifname.startswith('eth')) self.assertTrue(ipaddr is not None) try: socket.inet_aton(ipaddr) except socket.error: self.fail("not a valid ip address") def test_get_first_if_all_loopback(self): fake_ifaces = {'lo':'127.0.0.1'} with patch.object(osutil.DefaultOSUtil, 'get_primary_interface', return_value='bogus0'): with patch.object(osutil.DefaultOSUtil, '_get_all_interfaces', return_value=fake_ifaces): self.assertEqual(('', ''), osutil.DefaultOSUtil().get_first_if()) def test_get_all_interfaces(self): loopback_count = 0 non_loopback_count = 0 for iface in osutil.DefaultOSUtil()._get_all_interfaces(): if iface == 'lo': loopback_count += 1 else: non_loopback_count += 1 self.assertEqual(loopback_count, 1, 'Exactly 1 loopback network interface should exist') self.assertGreater(loopback_count, 0, 'At least 1 non-loopback network interface should exist') def test_isloopback(self): for iface in osutil.DefaultOSUtil()._get_all_interfaces(): if iface == 'lo': self.assertTrue(osutil.DefaultOSUtil().is_loopback(iface)) else: self.assertFalse(osutil.DefaultOSUtil().is_loopback(iface)) def test_isprimary(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ eth0 00000000 01345B0A 0003 0 0 5 00000000 0 0 0 \n\ eth0 00345B0A 00000000 0001 0 0 5 00000000 0 0 0 \n\ lo 00000000 01345B0A 
0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('lo')) self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('eth0')) def test_sriov(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n" \ "bond0 00000000 0100000A 0003 0 0 0 00000000 0 0 0 \n" \ "bond0 0000000A 00000000 0001 0 0 0 00000000 0 0 0 \n" \ "eth0 0000000A 00000000 0001 0 0 0 00000000 0 0 0 \n" \ "bond0 10813FA8 0100000A 0007 0 0 0 00000000 0 0 0 \n" \ "bond0 FEA9FEA9 0100000A 0007 0 0 0 00000000 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('eth0')) self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('bond0')) def test_multiple_default_routes(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ high 00000000 01345B0A 0003 0 0 5 00000000 0 0 0 \n\ low1 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('low1')) def test_multiple_interfaces(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ first 00000000 01345B0A 0003 0 0 1 00000000 0 0 0 \n\ secnd 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('first')) def test_interface_flags(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ nflg 00000000 01345B0A 0001 0 0 1 00000000 0 0 0 \n\ flgs 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('flgs')) 
def test_no_interface(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ ndst 00000001 01345B0A 0003 0 0 1 00000000 0 0 0 \n\ nflg 00000000 01345B0A 0001 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('ndst')) self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('nflg')) self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('invalid')) def test_no_primary_does_not_throw(self): with patch.object(osutil.DefaultOSUtil, 'get_primary_interface') \ as patch_primary: exception = False patch_primary.return_value = '' try: osutil.DefaultOSUtil().get_first_if()[0] except Exception as e: # pylint: disable=unused-variable print(textutil.format_exception(e)) exception = True self.assertFalse(exception) def test_dhcp_lease_default(self): self.assertTrue(osutil.DefaultOSUtil().get_dhcp_lease_endpoint() is None) def test_dhcp_lease_older_ubuntu(self): with patch.object(glob, "glob", return_value=['/var/lib/dhcp/dhclient.eth0.leases']): with patch(open_patch(), mock.mock_open(read_data=load_data("dhcp.leases"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='12.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='12.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='14.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='18.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is 
None) def test_dhcp_lease_newer_ubuntu(self): with patch.object(glob, "glob", return_value=['/run/systemd/netif/leases/2']): with patch(open_patch(), mock.mock_open(read_data=load_data("2"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='18.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='20.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") def test_dhcp_lease_custom_dns(self): """ Validate that the wireserver address is coming from option 245 (on default configurations the address is also available in the domain-name-servers option, but users may set up a custom dns server on their vnet) """ with patch.object(glob, "glob", return_value=['/var/lib/dhcp/dhclient.eth0.leases']): with patch(open_patch(), mock.mock_open(read_data=load_data("dhcp.leases.custom.dns"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='14.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertEqual(endpoint, "168.63.129.16") def test_dhcp_lease_multi(self): with patch.object(glob, "glob", return_value=['/var/lib/dhcp/dhclient.eth0.leases']): with patch(open_patch(), mock.mock_open(read_data=load_data("dhcp.leases.multi"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='12.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.2") def test_get_total_mem(self): """ Validate the returned value matches to the one retrieved by invoking shell command """ cmd = "grep MemTotal /proc/meminfo |awk '{print $2}'" ret = shellutil.run_get_output(cmd) if ret[0] == 0: self.assertEqual(int(ret[1]) / 1024, get_osutil().get_total_mem()) else: self.fail("Cannot retrieve total memory using 
shell command.") def test_get_processor_cores(self): """ Validate the returned value matches to the one retrieved by invoking shell command """ cmd = "grep 'processor.*:' /proc/cpuinfo |wc -l" ret = shellutil.run_get_output(cmd) if ret[0] == 0: self.assertEqual(int(ret[1]), get_osutil().get_processor_cores()) else: self.fail("Cannot retrieve number of process cores using shell command.") def test_conf_sshd(self): new_file = "\ Port 22\n\ Protocol 2\n\ ChallengeResponseAuthentication yes\n\ #PasswordAuthentication yes\n\ UsePAM yes\n\ " expected_output = "\ Port 22\n\ Protocol 2\n\ ChallengeResponseAuthentication no\n\ #PasswordAuthentication yes\n\ UsePAM yes\n\ PasswordAuthentication no\n\ ClientAliveInterval 180\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match(self): new_file = "\ Port 22\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " expected_output = "\ Port 22\n\ ChallengeResponseAuthentication no\n\ PasswordAuthentication no\n\ ClientAliveInterval 180\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_last(self): new_file = "\ Port 22\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " expected_output = "\ Port 22\n\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " with patch.object(fileutil, 'write_file') as 
patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_middle(self): new_file = "\ Port 22\n\ match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ match all\n\ #Other config\n\ " expected_output = "\ Port 22\n\ match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ match all\n\ #Other config\n\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_multiple(self): new_file = "\ Port 22\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ Match all\n\ #Other config\n\ " expected_output = "\ Port 22\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ Match all\n\ #Other config\n\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_multiple_first_last(self): new_file = "\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ " expected_output = "\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ Match host 
192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_correct_instance_id(self): util = osutil.DefaultOSUtil() self.assertEqual( "12345678-1234-1234-1234-123456789012", util._correct_instance_id("78563412-3412-3412-1234-123456789012")) self.assertEqual( "D0DF4C54-4ECB-4A4B-9954-5BDF3ED5C3B8", util._correct_instance_id("544CDFD0-CB4E-4B4A-9954-5BDF3ED5C3B8")) self.assertEqual( "d0df4c54-4ecb-4a4b-9954-5bdf3ed5c3b8", util._correct_instance_id("544cdfd0-cb4e-4b4a-9954-5bdf3ed5c3b8")) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="33C2F3B9-1399-429F-8EB3-BA656DF32502") def test_get_instance_id_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( util.get_instance_id(), "B9F3C233-9913-9F42-8EB3-BA656DF32502") @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="") def test_get_instance_id_empty_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( "", util.get_instance_id()) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="Value") def test_get_instance_id_malformed_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( "Value", util.get_instance_id()) @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output', return_value=[0, '33C2F3B9-1399-429F-8EB3-BA656DF32502']) def 
test_get_instance_id_from_dmidecode(self, mock_shell, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( util.get_instance_id(), "B9F3C233-9913-9F42-8EB3-BA656DF32502") @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output', return_value=[1, 'Error Value']) def test_get_instance_id_missing(self, mock_shell, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual("", util.get_instance_id()) @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output', return_value=[0, 'Unexpected Value']) def test_get_instance_id_unexpected(self, mock_shell, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual("", util.get_instance_id()) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file') def test_is_current_instance_id_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() mock_read.return_value = "11111111-2222-3333-4444-556677889900" self.assertFalse(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "B9F3C233-9913-9F42-8EB3-BA656DF32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "33C2F3B9-1399-429F-8EB3-BA656DF32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "b9f3c233-9913-9f42-8eb3-ba656df32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "33c2f3b9-1399-429f-8eb3-ba656df32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output') def 
test_is_current_instance_id_from_dmidecode(self, mock_shell, mock_isfile):  # pylint: disable=unused-argument
        util = osutil.DefaultOSUtil()

        mock_shell.return_value = [0, 'B9F3C233-9913-9F42-8EB3-BA656DF32502']
        self.assertTrue(util.is_current_instance_id(
            "B9F3C233-9913-9F42-8EB3-BA656DF32502"))

        mock_shell.return_value = [0, '33C2F3B9-1399-429F-8EB3-BA656DF32502']
        self.assertTrue(util.is_current_instance_id(
            "B9F3C233-9913-9F42-8EB3-BA656DF32502"))

    @patch('azurelinuxagent.common.conf.get_sudoers_dir')
    def test_conf_sudoer(self, mock_dir):
        tmp_dir = tempfile.mkdtemp()
        mock_dir.return_value = tmp_dir

        util = osutil.DefaultOSUtil()

        # Assert the sudoer line is added if missing
        util.conf_sudoer("FooBar")
        waagent_sudoers = os.path.join(tmp_dir, 'waagent')
        self.assertTrue(os.path.isfile(waagent_sudoers))

        count = -1
        with open(waagent_sudoers, 'r') as f:
            count = len(f.readlines())
        self.assertEqual(1, count)

        # Assert the line does not get added a second time
        util.conf_sudoer("FooBar")

        count = -1
        with open(waagent_sudoers, 'r') as f:
            count = len(f.readlines())
        print("WRITING TO {0}".format(waagent_sudoers))
        self.assertEqual(1, count)

    def test_get_nic_state(self):
        state = osutil.DefaultOSUtil().get_nic_state()
        self.assertNotEqual(state, {})
        self.assertGreater(len(state.keys()), 1)

        another_state = osutil.DefaultOSUtil().get_nic_state()
        name = list(another_state.keys())[0]
        another_state[name].add_ipv4("xyzzy")
        self.assertNotEqual(state, another_state)

        as_string = osutil.DefaultOSUtil().get_nic_state(as_string=True)
        self.assertNotEqual(as_string, '')

    def test_get_used_and_available_system_memory(self):
        memory_table = "\
              total        used        free      shared  buff/cache   available \n\
Mem:     8340144128   619352064  5236809728     1499136  2483982336  7426314240 \n\
Swap:             0           0           0 \n"
        with patch.object(shellutil, 'run_command', return_value=memory_table):
            used_mem, available_mem = osutil.DefaultOSUtil().get_used_and_available_system_memory()

        self.assertEqual(used_mem, 619352064 / (1024 ** 2), "The value didn't match")
        self.assertEqual(available_mem, 7426314240 / (1024 ** 2), "The value didn't match")

    def test_get_used_and_available_system_memory_error(self):
        msg = 'message'
        exception = shellutil.CommandError("free -d", 1, "", msg)
        with patch.object(shellutil, 'run_command', side_effect=exception) as patch_run:
            with self.assertRaises(shellutil.CommandError) as context_manager:
                osutil.DefaultOSUtil().get_used_and_available_system_memory()
            self.assertEqual(patch_run.call_count, 1)
            self.assertEqual(context_manager.exception.returncode, 1)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, osutil.DefaultOSUtil())

    def test_get_dhcp_pid_should_return_an_empty_list_when_the_dhcp_client_is_not_running(self):
        original_run_command = shellutil.run_command

        def mock_run_command(cmd):  # pylint: disable=unused-argument
            return original_run_command(["pidof", "non-existing-process"])

        with patch("azurelinuxagent.common.utils.shellutil.run_command", side_effect=mock_run_command):
            pid_list = osutil.DefaultOSUtil().get_dhcp_pid()

        self.assertTrue(len(pid_list) == 0, "the return value is not an empty list: {0}".format(pid_list))

    @patch('os.walk', return_value=[('host3/target3:0:1/3:0:1:0/block', ['sdb'], [])])
    @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value='{00000000-0001-8899-0000-000000000000}')
    @patch('os.listdir', return_value=['00000000-0001-8899-0000-000000000000'])
    @patch('os.path.exists', return_value=True)
    def test_device_for_ide_port_gen1_success(
            self,
            os_path_exists,  # pylint: disable=unused-argument
            os_listdir,  # pylint: disable=unused-argument
            fileutil_read_file,  # pylint: disable=unused-argument
            os_walk):  # pylint: disable=unused-argument
        dev = osutil.DefaultOSUtil().device_for_ide_port(1)
        self.assertEqual(dev, 'sdb', 'The returned device should be the resource disk')

    @patch('os.walk', return_value=[('host0/target0:0:0/0:0:0:1/block', ['sdb'], [])])
    @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value='{f8b3781a-1e82-4818-a1c3-63d806ec15bb}')
    @patch('os.listdir', return_value=['f8b3781a-1e82-4818-a1c3-63d806ec15bb'])
    @patch('os.path.exists', return_value=True)
    def test_device_for_ide_port_gen2_success(
            self,
            os_path_exists,  # pylint: disable=unused-argument
            os_listdir,  # pylint: disable=unused-argument
            fileutil_read_file,  # pylint: disable=unused-argument
            os_walk):  # pylint: disable=unused-argument
        dev = osutil.DefaultOSUtil().device_for_ide_port(1)
        self.assertEqual(dev, 'sdb', 'The returned device should be the resource disk')

    @patch('os.listdir', return_value=['00000000-0000-0000-0000-000000000000'])
    @patch('os.path.exists', return_value=True)
    def test_device_for_ide_port_none(
            self,
            os_path_exists,  # pylint: disable=unused-argument
            os_listdir):  # pylint: disable=unused-argument
        dev = osutil.DefaultOSUtil().device_for_ide_port(1)
        self.assertIsNone(dev, 'None should be returned if no resource disk found')


def osutil_get_dhcp_pid_should_return_a_list_of_pids(test_instance, osutil_instance):
    """
    This is a very basic test for osutil.get_dhcp_pid. It is simply meant to exercise the implementation of that
    method in case there are any basic errors, such as typos, etc. The test does not verify that the implementation
    returns the PID for the actual dhcp client; in fact, it uses a mock that invokes pidof to return the PID of an
    arbitrary process (the pidof process itself). Most implementations of get_dhcp_pid use pidof with the appropriate
    name for the dhcp client.
    The test is defined as a global function to make it easily accessible from the test suites for each distro.
    """
    original_run_command = shellutil.run_command

    def mock_run_command(cmd):  # pylint: disable=unused-argument
        return original_run_command(["pidof", "pidof"])

    with patch("azurelinuxagent.common.utils.shellutil.run_command", side_effect=mock_run_command):
        pid = osutil_instance.get_dhcp_pid()

    test_instance.assertTrue(len(pid) != 0, "get_dhcp_pid did not return a PID")


class TestGetPublishedHostname(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        self.__published_hostname = os.path.join(self.tmp_dir, "published_hostname")
        self.__patcher = patch('azurelinuxagent.common.osutil.default.conf.get_published_hostname',
                               return_value=self.__published_hostname)
        self.__patcher.start()

    def tearDown(self):
        self.__patcher.stop()
        AgentTestCase.tearDown(self)

    def __get_published_hostname_contents(self):
        with open(self.__published_hostname, "r") as file_:
            return file_.read()

    def test_get_hostname_record_should_create_published_hostname(self):
        actual = osutil.DefaultOSUtil().get_hostname_record()

        expected = socket.gethostname()
        self.assertEqual(expected, actual, "get_hostname_record returned an incorrect hostname")
        self.assertTrue(os.path.exists(self.__published_hostname), "The published_hostname file was not created")
        self.assertEqual(expected, self.__get_published_hostname_contents(),
                         "get_hostname_record returned an incorrect hostname")

    def test_get_hostname_record_should_use_existing_published_hostname(self):
        expected = "a-sample-hostname-used-for-testing"
        with open(self.__published_hostname, "w") as file_:
            file_.write(expected)

        actual = osutil.DefaultOSUtil().get_hostname_record()

        self.assertEqual(expected, actual, "get_hostname_record returned an incorrect hostname")
        self.assertEqual(expected, self.__get_published_hostname_contents(),
                         "get_hostname_record returned an incorrect hostname")

    def test_get_hostname_record_should_initialize_the_host_name_using_cloud_init_info(self):
        with MockEnvironment(self.tmp_dir, files=[('/var/lib/cloud/data/set-hostname',
                                                   os.path.join(data_dir, "cloud-init", "set-hostname"))]):
            actual = osutil.DefaultOSUtil().get_hostname_record()

        expected = "a-sample-set-hostname"
        self.assertEqual(expected, actual, "get_hostname_record returned an incorrect hostname")
        self.assertEqual(expected, self.__get_published_hostname_contents(),
                         "get_hostname_record returned an incorrect hostname")

    def test_get_password_hash(self):
        with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'test_passwords.txt'), 'rb') as in_file:
            for data in in_file:
                # Remove bom on bytes data before it is converted into string.
                data = textutil.remove_bom(data)
                data = ustr(data, encoding='utf-8')
                password_hash = osutil.DefaultOSUtil.gen_password_hash(data, 6, 10)
                self.assertNotEqual(None, password_hash)


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_default_osutil.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from azurelinuxagent.common.osutil.default import DefaultOSUtil, shellutil  # pylint: disable=unused-import
from tests.lib.tools import AgentTestCase, patch  # pylint: disable=unused-import


class DefaultOsUtilTestCase(AgentTestCase):
    def test_default_service_name(self):
        self.assertEqual(DefaultOSUtil().get_service_name(), "waagent")


Azure-WALinuxAgent-a976115/tests/common/osutil/test_factory.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from azurelinuxagent.common.osutil.alpine import AlpineOSUtil
from azurelinuxagent.common.osutil.arch import ArchUtil
from azurelinuxagent.common.osutil.bigip import BigIpOSUtil
from azurelinuxagent.common.osutil.clearlinux import ClearLinuxUtil
from azurelinuxagent.common.osutil.coreos import CoreOSUtil
from azurelinuxagent.common.osutil.debian import DebianOSBaseUtil, DebianOSModernUtil
from azurelinuxagent.common.osutil.devuan import DevuanOSUtil
from azurelinuxagent.common.osutil.default import DefaultOSUtil
from azurelinuxagent.common.osutil.factory import _get_osutil
from azurelinuxagent.common.osutil.freebsd import FreeBSDOSUtil
from azurelinuxagent.common.osutil.gaia import GaiaOSUtil
from azurelinuxagent.common.osutil.iosxe import IosxeOSUtil
from azurelinuxagent.common.osutil.openbsd import OpenBSDOSUtil
from azurelinuxagent.common.osutil.openwrt import OpenWRTOSUtil
from azurelinuxagent.common.osutil.photonos import PhotonOSUtil
from azurelinuxagent.common.osutil.redhat import RedhatOSUtil, Redhat6xOSUtil
from azurelinuxagent.common.osutil.suse import SUSEOSUtil, SUSE11OSUtil
from azurelinuxagent.common.osutil.ubuntu import UbuntuOSUtil, Ubuntu12OSUtil, Ubuntu14OSUtil, \
    UbuntuSnappyOSUtil, Ubuntu16OSUtil, Ubuntu18OSUtil
from tests.lib.tools import AgentTestCase, patch


class TestOsUtilFactory(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    @patch("azurelinuxagent.common.logger.warn")
    def test_get_osutil_it_should_return_default(self, patch_logger):
        ret = _get_osutil(distro_name="",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, DefaultOSUtil))
        self.assertEqual(patch_logger.call_count, 1)
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_ubuntu(self):
        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="",
                          distro_version="10.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, UbuntuOSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="",
                          distro_version="12.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, Ubuntu12OSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="trusty",
                          distro_version="14.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, Ubuntu14OSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="xenial",
                          distro_version="16.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, Ubuntu16OSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="bionic",
                          distro_version="18.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, Ubuntu18OSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="focal",
                          distro_version="20.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, Ubuntu18OSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="noble",
                          distro_version="24.04",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, Ubuntu18OSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

        ret = _get_osutil(distro_name="ubuntu",
                          distro_code_name="",
                          distro_version="10.04",
                          distro_full_name="Snappy Ubuntu Core")
        self.assertTrue(isinstance(ret, UbuntuSnappyOSUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

    def test_get_osutil_it_should_return_arch(self):
        ret = _get_osutil(distro_name="arch",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, ArchUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_clear_linux(self):
        ret = _get_osutil(distro_name="clear linux",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="Clear Linux")
        self.assertTrue(isinstance(ret, ClearLinuxUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_alpine(self):
        ret = _get_osutil(distro_name="alpine",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, AlpineOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_kali(self):
        ret = _get_osutil(distro_name="kali",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, DebianOSBaseUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_coreos(self):
        ret = _get_osutil(distro_name="coreos",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, CoreOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="flatcar",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, CoreOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_suse(self):
        ret = _get_osutil(distro_name="suse",
                          distro_code_name="",
                          distro_version="10",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, SUSEOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="suse",
                          distro_code_name="",
                          distro_full_name="SUSE Linux Enterprise Server",
                          distro_version="11")
        self.assertTrue(isinstance(ret, SUSE11OSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="suse",
                          distro_code_name="",
                          distro_full_name="openSUSE",
                          distro_version="12")
        self.assertTrue(isinstance(ret, SUSE11OSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_debian(self):
        ret = _get_osutil(distro_name="debian",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="7")
        self.assertTrue(isinstance(ret, DebianOSBaseUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="debian",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="8")
        self.assertTrue(isinstance(ret, DebianOSModernUtil))
        self.assertEqual(ret.get_service_name(), "walinuxagent")

    def test_get_osutil_it_should_return_devuan(self):
        ret = _get_osutil(distro_name="devuan",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="4")
        self.assertTrue(isinstance(ret, DevuanOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_redhat(self):
        ret = _get_osutil(distro_name="redhat",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="6")
        self.assertTrue(isinstance(ret, Redhat6xOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="rhel",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="6")
        self.assertTrue(isinstance(ret, Redhat6xOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="centos",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="6")
        self.assertTrue(isinstance(ret, Redhat6xOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="oracle",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="6")
        self.assertTrue(isinstance(ret, Redhat6xOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="redhat",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="7")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="rhel",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="7")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="centos",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="7")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="oracle",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="7")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="almalinux",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="8")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="cloudlinux",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="8")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

        ret = _get_osutil(distro_name="rocky",
                          distro_code_name="",
                          distro_full_name="",
                          distro_version="8")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_euleros(self):
        ret = _get_osutil(distro_name="euleros",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_uos(self):
        ret = _get_osutil(distro_name="uos",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, RedhatOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_freebsd(self):
        ret = _get_osutil(distro_name="freebsd",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, FreeBSDOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_openbsd(self):
        ret = _get_osutil(distro_name="openbsd",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, OpenBSDOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_bigip(self):
        ret = _get_osutil(distro_name="bigip",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, BigIpOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_gaia(self):
        ret = _get_osutil(distro_name="gaia",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, GaiaOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_iosxe(self):
        ret = _get_osutil(distro_name="iosxe",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, IosxeOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_openwrt(self):
        ret = _get_osutil(distro_name="openwrt",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, OpenWRTOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")

    def test_get_osutil_it_should_return_photonos(self):
        ret = _get_osutil(distro_name="photonos",
                          distro_code_name="",
                          distro_version="",
                          distro_full_name="")
        self.assertTrue(isinstance(ret, PhotonOSUtil))
        self.assertEqual(ret.get_service_name(), "waagent")


Azure-WALinuxAgent-a976115/tests/common/osutil/test_freebsd.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.common.osutil.freebsd import FreeBSDOSUtil
from azurelinuxagent.common.utils import textutil
from tests.lib.tools import AgentTestCase, patch
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestFreeBSDOSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, FreeBSDOSUtil())

    def test_empty_proc_net_route(self):
        route_table = ""

        with patch.object(shellutil, 'run_command', return_value=route_table):
            # Header line only
            self.assertEqual(len(FreeBSDOSUtil().read_route_table()), 1)

    def test_no_routes(self):
        route_table = """Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire

"""
        with patch.object(shellutil, 'run_command', return_value=route_table):
            raw_route_list = FreeBSDOSUtil().read_route_table()

        self.assertEqual(len(FreeBSDOSUtil().get_list_of_routes(raw_route_list)), 0)

    def test_bogus_proc_net_route(self):
        route_table = """Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
1.1.1              0.0.0
"""
        with patch.object(shellutil, 'run_command', return_value=route_table):
            raw_route_list = FreeBSDOSUtil().read_route_table()

        self.assertEqual(len(FreeBSDOSUtil().get_list_of_routes(raw_route_list)), 0)

    def test_valid_routes(self):
        route_table = """Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
0.0.0.0            10.145.187.193     UGS         em0
10.145.187.192/26  0.0.0.0            US          em0
168.63.129.16      10.145.187.193     UH          em0
169.254.169.254    10.145.187.193     UHS         em0
192.168.43.0       0.0.0.0            US        vtbd0
"""
        with patch.object(shellutil, 'run_command', return_value=route_table):
            raw_route_list = FreeBSDOSUtil().read_route_table()

        self.assertEqual(len(raw_route_list), 6)

        route_list = FreeBSDOSUtil().get_list_of_routes(raw_route_list)

        self.assertEqual(len(route_list), 5)
        self.assertEqual(route_list[0].gateway_quad(), '10.145.187.193')
        self.assertEqual(route_list[1].gateway_quad(), '0.0.0.0')
        self.assertEqual(route_list[1].mask_quad(), '255.255.255.192')
        self.assertEqual(route_list[2].destination_quad(), '168.63.129.16')
        self.assertEqual(route_list[1].flags, 1)
        self.assertEqual(route_list[2].flags, 33)
        self.assertEqual(route_list[3].flags, 5)
        self.assertEqual((route_list[3].metric - route_list[4].metric), 1)
        self.assertEqual(route_list[0].interface, 'em0')
        self.assertEqual(route_list[4].interface, 'vtbd0')

    def test_get_first_if(self):
        """
        Validate that the agent can find the first active non-loopback interface.
        This test case used to run live, but not all developers have an eth* interface.
        It is perfectly valid to have a br*, but this test does not account for that.
        """
        freebsdosutil = FreeBSDOSUtil()

        with patch.object(freebsdosutil, '_get_net_info', return_value=('em0', '10.0.0.1', 'e5:f0:38:aa:da:52')):
            ifname, ipaddr = freebsdosutil.get_first_if()

        self.assertEqual(ifname, 'em0')
        self.assertEqual(ipaddr, '10.0.0.1')

    def test_no_primary_does_not_throw(self):
        freebsdosutil = FreeBSDOSUtil()

        with patch.object(freebsdosutil, '_get_net_info', return_value=('em0', '10.0.0.1', 'e5:f0:38:aa:da:52')):
            try:
                freebsdosutil.get_first_if()[0]
            except Exception as e:  # pylint: disable=unused-variable
                print(textutil.format_exception(e))
                exception = True  # pylint: disable=unused-variable


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_nsbsd.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest
from os import path

from azurelinuxagent.common.osutil.nsbsd import NSBSDOSUtil
from azurelinuxagent.common.utils.fileutil import read_file
from tests.lib.tools import AgentTestCase, patch


class TestNSBSDOSUtil(AgentTestCase):
    dhclient_pid_file = "/var/run/dhclient.pid"

    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        with patch.object(NSBSDOSUtil, "resolver"):  # instantiating NSBSDOSUtil requires a resolver
            original_isfile = path.isfile

            def mock_isfile(path):  # pylint: disable=redefined-outer-name
                return True if path == self.dhclient_pid_file else original_isfile(path)

            original_read_file = read_file

            def mock_read_file(file, *args, **kwargs):  # pylint: disable=redefined-builtin
                return "123" if file == self.dhclient_pid_file else original_read_file(file, *args, **kwargs)

            with patch("os.path.isfile", mock_isfile):
                with patch("azurelinuxagent.common.osutil.nsbsd.fileutil.read_file", mock_read_file):
                    pid_list = NSBSDOSUtil().get_dhcp_pid()

            self.assertEqual(pid_list, [123])

    def test_get_dhcp_pid_should_return_an_empty_list_when_the_dhcp_client_is_not_running(self):
        with patch.object(NSBSDOSUtil, "resolver"):  # instantiating NSBSDOSUtil requires a resolver
            #
            # PID file does not exist
            #
            original_isfile = path.isfile

            def mock_isfile(path):  # pylint: disable=redefined-outer-name
                return False if path == self.dhclient_pid_file else original_isfile(path)

            with patch("os.path.isfile", mock_isfile):
                pid_list = NSBSDOSUtil().get_dhcp_pid()

            self.assertEqual(pid_list, [])

            #
            # PID file is empty
            #
            original_isfile = path.isfile

            def mock_isfile(path):  # pylint: disable=redefined-outer-name,function-redefined
                return True if path == self.dhclient_pid_file else original_isfile(path)

            original_read_file = read_file

            def mock_read_file(file, *args, **kwargs):  # pylint: disable=redefined-builtin
                return "" if file == self.dhclient_pid_file else original_read_file(file, *args, **kwargs)

            with patch("os.path.isfile", mock_isfile):
                with patch("azurelinuxagent.common.osutil.nsbsd.fileutil.read_file", mock_read_file):
                    pid_list = NSBSDOSUtil().get_dhcp_pid()

            self.assertEqual(pid_list, [])


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_openbsd.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.osutil.openbsd import OpenBSDOSUtil
from tests.lib.tools import AgentTestCase
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestOpenBSDOSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, OpenBSDOSUtil())


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_openwrt.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.osutil.openwrt import OpenWRTOSUtil
from tests.lib.tools import AgentTestCase
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestOpenWRTOSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, OpenWRTOSUtil())


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_passwords.txt

김치 करी hamburger café

Azure-WALinuxAgent-a976115/tests/common/osutil/test_photonos.py

# Copyright 2021 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.osutil.photonos import PhotonOSUtil
from tests.lib.tools import AgentTestCase
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestPhotonOSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, PhotonOSUtil())


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_redhat.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.osutil.redhat import Redhat6xOSUtil
from tests.lib.tools import AgentTestCase
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestRedhat6xOSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, Redhat6xOSUtil())


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/osutil/test_suse.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.osutil.suse import SUSE11OSUtil
from tests.lib.tools import AgentTestCase
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestSUSE11OSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, SUSE11OSUtil())


if __name__ == '__main__':
    unittest.main()

Azure-WALinuxAgent-a976115/tests/common/osutil/test_ubuntu.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.osutil.ubuntu import Ubuntu12OSUtil, Ubuntu18OSUtil
from tests.lib.tools import AgentTestCase
from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids


class TestUbuntu12OSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, Ubuntu12OSUtil())


class TestUbuntu18OSUtil(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    def test_get_dhcp_pid_should_return_a_list_of_pids(self):
        osutil_get_dhcp_pid_should_return_a_list_of_pids(self, Ubuntu18OSUtil())


if __name__ == '__main__':
    unittest.main()

Azure-WALinuxAgent-a976115/tests/common/protocol/__init__.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
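The four distro-specific test modules above all delegate to a single shared assertion, `osutil_get_dhcp_pid_should_return_a_list_of_pids`, imported from `test_default`. A minimal, self-contained sketch of that reuse pattern — the helper and the `Fake*OSUtil` classes below are stand-ins for illustration, not the agent's real implementations:

```python
import unittest


def assert_get_dhcp_pid_returns_list(test_case, os_util):
    # Shared helper: every distro-specific OSUtil must return a list of ints.
    pids = os_util.get_dhcp_pid()
    test_case.assertIsInstance(pids, list)
    test_case.assertTrue(all(isinstance(p, int) for p in pids))


class FakeDefaultOSUtil(object):
    def get_dhcp_pid(self):
        return [1234]


class FakeSuseOSUtil(FakeDefaultOSUtil):
    # A distro subclass can override the lookup; the shared assertion
    # still applies unchanged.
    def get_dhcp_pid(self):
        return [1234, 5678]


class TestSharedHelperPattern(unittest.TestCase):
    def test_default(self):
        assert_get_dhcp_pid_returns_list(self, FakeDefaultOSUtil())

    def test_suse(self):
        assert_get_dhcp_pid_returns_list(self, FakeSuseOSUtil())
```

Keeping the assertion in one place means a change to the expected contract (for example, returning PIDs of a different DHCP client) is made once rather than in every per-distro module.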
#
# Requires Python 2.6+ and Openssl 1.0+
#

Azure-WALinuxAgent-a976115/tests/common/protocol/test_datacontract.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import unittest

from azurelinuxagent.common.datacontract import get_properties, set_properties, DataContract, DataContractList


class SampleDataContract(DataContract):
    def __init__(self):
        self.foo = None
        self.bar = DataContractList(int)


class TestDataContract(unittest.TestCase):
    def test_get_properties(self):
        obj = SampleDataContract()
        obj.foo = "foo"
        obj.bar.append(1)
        data = get_properties(obj)
        self.assertEqual("foo", data["foo"])
        self.assertEqual(list, type(data["bar"]))

    def test_set_properties(self):
        obj = SampleDataContract()
        data = {
            'foo': 1,
            'baz': 'a'
        }
        set_properties('sample', obj, data)
        self.assertFalse(hasattr(obj, 'baz'))


if __name__ == '__main__':
    unittest.main()

Azure-WALinuxAgent-a976115/tests/common/protocol/test_extensions_goal_state_from_extensions_config.py

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the Apache License.
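The `TestDataContract` cases above pin down two behaviors: `get_properties` flattens a `DataContractList` into a plain list, and `set_properties` silently drops keys the contract does not declare. A simplified stand-in that reproduces just those two behaviors — the real `azurelinuxagent.common.datacontract` module is more involved, so treat this only as a sketch of the semantics the tests assert:

```python
class DataContractList(list):
    """A list that remembers the element type it is meant to hold (stand-in)."""
    def __init__(self, item_cls):
        super(DataContractList, self).__init__()
        self.item_cls = item_cls


def get_properties(obj):
    # Serialize instance attributes; a DataContractList becomes a plain list.
    data = {}
    for name, value in vars(obj).items():
        data[name] = list(value) if isinstance(value, DataContractList) else value
    return data


def set_properties(_name, obj, data):
    # Only assign keys the contract already declares; unknown keys are
    # dropped, which is what test_set_properties above asserts via
    # assertFalse(hasattr(obj, 'baz')).
    for key, value in data.items():
        if hasattr(obj, key):
            setattr(obj, key, value)
    return obj


class SampleDataContract(object):
    def __init__(self):
        self.foo = None
        self.bar = DataContractList(int)
```

One reason a parser might drop unknown keys is forward compatibility: new fields added to the wire protocol do not break older agents.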
from azurelinuxagent.common.AgentGlobals import AgentGlobals
from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateChannel
from tests.lib.mock_wire_protocol import wire_protocol_data, mock_wire_protocol
from tests.lib.tools import AgentTestCase


class ExtensionsGoalStateFromExtensionsConfigTestCase(AgentTestCase):
    def test_it_should_parse_in_vm_metadata(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_META_DATA) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual("555e551c-600e-4fb4-90ba-8ab8ec28eccc", extensions_goal_state.activity_id, "Incorrect activity Id")
            self.assertEqual("400de90b-522e-491f-9d89-ec944661f531", extensions_goal_state.correlation_id, "Incorrect correlation Id")
            self.assertEqual('2020-11-09T17:48:50.412125Z', extensions_goal_state.created_on_timestamp, "Incorrect GS Creation time")

    def test_it_should_use_default_values_when_in_vm_metadata_is_missing(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf-no_gs_metadata.xml"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.activity_id, "Incorrect activity Id")
            self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.correlation_id, "Incorrect correlation Id")
            self.assertEqual('0001-01-01T00:00:00.000000Z', extensions_goal_state.created_on_timestamp, "Incorrect GS Creation time")

    def test_it_should_use_default_values_when_in_vm_metadata_is_invalid(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_INVALID_VM_META_DATA) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.activity_id, "Incorrect activity Id")
            self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.correlation_id, "Incorrect correlation Id")
            self.assertEqual('0001-01-01T00:00:00.000000Z', extensions_goal_state.created_on_timestamp, "Incorrect GS Creation time")

    def test_it_should_parse_missing_status_upload_blob_as_none(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "hostgaplugin/ext_conf-no_status_upload_blob.xml"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertIsNone(extensions_goal_state.status_upload_blob, "Expected status upload blob to be None")
            self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, "Expected status upload blob to be Block")

    def test_it_should_default_to_block_blob_when_the_status_blob_type_is_not_valid(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "hostgaplugin/ext_conf-invalid_blob_type.xml"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, 'Expected BlockBlob for an invalid statusBlobType')

    def test_it_should_parse_empty_depends_on_as_dependency_level_0(self):
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-empty_depends_on.json"
        data_file["ext_conf"] = "hostgaplugin/ext_conf-empty_depends_on.xml"
        with mock_wire_protocol(data_file) as protocol:
            extensions = protocol.get_goal_state().extensions_goal_state.extensions

            self.assertEqual(0, extensions[0].settings[0].dependencyLevel, "Incorrect dependencyLevel")

    def test_its_source_channel_should_be_wire_server(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual(GoalStateChannel.WireServer, extensions_goal_state.channel, "The channel is incorrect")

    def test_it_should_parse_from_version_properly(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            agent_families = protocol.get_goal_state().extensions_goal_state.agent_families
            for family in agent_families:
                self.assertIsNone(family.from_version, "from_version should be None")

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "hostgaplugin/ext_conf-agent_family_version.xml"
        with mock_wire_protocol(data_file) as protocol:
            agent_families = protocol.get_goal_state().extensions_goal_state.agent_families
            for family in agent_families:
                self.assertEqual(family.from_version, "9.9.9.9", "FromVersion should be 9.9.9.9")

    def test_it_should_parse_is_version_from_rsm_properly(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            agent_families = protocol.get_goal_state().extensions_goal_state.agent_families
            for family in agent_families:
                self.assertIsNone(family.is_version_from_rsm, "is_version_from_rsm should be None")

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "hostgaplugin/ext_conf-agent_family_version.xml"
        with mock_wire_protocol(data_file) as protocol:
            agent_families = protocol.get_goal_state().extensions_goal_state.agent_families
            for family in agent_families:
                self.assertTrue(family.is_version_from_rsm, "is_version_from_rsm should be True")

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "hostgaplugin/ext_conf-rsm_version_properties_false.xml"
        with mock_wire_protocol(data_file) as protocol:
            agent_families = protocol.get_goal_state().extensions_goal_state.agent_families
            for family in agent_families:
                self.assertFalse(family.is_version_from_rsm, "is_version_from_rsm should be False")

    def test_it_should_parse_is_vm_enabled_for_rsm_upgrades(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            agent_families = protocol.get_goal_state().extensions_goal_state.agent_families
            for family in agent_families:
                self.assertIsNone(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be
None") data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-agent_family_version.xml" with mock_wire_protocol(data_file) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertTrue(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be True") data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-rsm_version_properties_false.xml" with mock_wire_protocol(data_file) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertFalse(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be False") def test_it_should_parse_encoded_signature_plugin_property(self): data_file = wire_protocol_data.DATA_FILE.copy() expected_signature = "MIInEAYJKoZIhvcNAQcCoIInATCCJv0CAQMxDTALBglghkgBZQMEAgIwCQYHgUuDSAcICaCCDXYwggX0MIID3KADAgECAhMzAAADrzBADkyjTQVBAAAAAAOvMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTEwHhcNMjMxMTE2MTkwOTAwWhcNMjQxMTE0MTkwOTAwWjB0MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDOS8s1ra6f0YGtg0OhEaQa/t3Q+q1MEHhWJhqQVuO5amYXQpy8MDPNoJYk+FWAhePP5LxwcSge5aen+f5Q6WNPd6EDxGzotvVpNi5ve0H97S3F7C/axDfKxyNh21MG0W8Sb0vxi/vorcLHOL9i+t2D6yvvDzLlEefUCbQV/zGCBjXGlYJcUj6RAzXyeNANxSpKXAGd7Fh+ocGHPPphcD9LQTOJgG7Y7aYztHqBLJiQQ4eAgZNU4ac6+8LnEGALgo1ydC5BJEuJQjYKbNTy959HrKSu7LO3Ws0w8jw6pYdC1IMpdTkk2puTgY2PDNzBtLM4evG7FYer3WX+8t1UMYNTAgMBAAGjggFzMIIBbzAfBgNVHSUEGDAWBgorBgEEAYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQURxxxNPIEPGSO8kqz+bgCAQWGXsEwRQYDVR0RBD4wPKQ6MDgxHjAcBgNVBAsTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UEBRMNMjMwM
DEyKzUwMTgyNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAISxFt/zR2frTFPB45YdmhZpB2nNJoOoi+qlgcTlnO4QwlYN1w/vYwbDy/oFJolD5r6FMJd0RGcgEM8q9TgQ2OC7gQEmhweVJ7yuKJlQBH7P7Pg5RiqgV3cSonJ+OM4kFHbP3gPLiyzssSQdRuPY1mIWoGg9i7Y4ZC8ST7WhpSyc0pns2XsUe1XsIjaUcGu7zd7gg97eCUiLRdVklPmpXobH9CEAWakRUGNICYN2AgjhRTC4j3KJfqMkU04R6Toyh4/Toswm1uoDcGr5laYnTfcX3u5WnJqJLhuPe8Uj9kGAOcyo0O1mNwDa+LhFEzB6CB32+wfJMumfr6degvLTe8x55urQLeTjimBQgS49BSUkhFN7ois3cZyNpnrMca5AZaC7pLI72vuqSsSlLalGOcZmPHZGYJqZ0BacN274OZ80Q8B11iNokns9Od348bMb5Z4fihxaBWebl8kWEi2OPvQImOAeq3nt7UWJBzJYLAGEpfasaA3ZQgIcEXdD+uwo6ymMzDY6UamFOfYqYWXkntxDGu7ngD2ugKUuccYKJJRiiz+LAUcj90BVcSHRLQop9N8zoALr/1sJuwPrVAtxHNEgSW+AKBqIxYWM4Ev32l6agSUAezLMbq5f3d8x9qzT031jMDT+sUAoCw0M5wVtCUQcqINPuYjbS1WgJyZIiEkBMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf2
8AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCGWIwghleAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAOvMEAOTKNNBUEAAAAAA68wCwYJYIZIAWUDBAICoFkwFgYJKoZIhvcNAQkDMQkGB4FLg0gHCAkwPwYJKoZIhvcNAQkEMTIEMDBbd8WC98w2hp0LRsyGXkhY0ZY+y0Pl20deVXonOXR+vDsyK96L9uBzpNRlolZD0DANBgkqhkiG9w0BAQEFAASCAQAIaK9t6Unz6YcKR2q8D2Vjvq9j+YK0U
1+tb8s2ZslmmL19Yeb+NRy4tkS7lVEmMYRiFTy+jyis6UGL81ziXEXqAfqjkJt/zjN/8Qek91fzKYJMuCfEm6xVv+gfNHCp0fuGn4b9QNoD7UUMe4oBskSSLSiW0ri9FblSdjeoLZKvoRzHFBF94wI2Kw0iCBUQgNKHKT3lyG9D4NQySAaS0BnYG/s/HPgGMPT6peWRWAXkuTQ8zxb98pOzdf3HZ4Zz2n8qEh1BM6nHba2CKnDP0yjEz7OERVWcLUVPcTHC/xG94cp1gdlKQ09t3H7lBwccxmztUt9sIGUAdeJFAChTvvnSoYIXRDCCF0AGCyqGSIb3DQEJEAIOMYIXLzCCFysGCSqGSIb3DQEHAqCCFxwwghcYAgEDMQ8wDQYJYIZIAWUDBAIBBQAwggFzBgsqhkiG9w0BCRABBKCCAWIEggFeMIIBWgIBAQYKKwYBBAGEWQoDATAxMA0GCWCGSAFlAwQCAQUABCALbe+1JlANO/4xRH8dJHYO8uMX6ee/KhxzL1ZHE4fguAIGZnLzb33XGBMyMDI0MDYyMDIzMzgyOS4yMzNaMASAAgH0AhgsprYE/OXhkFp093+I2SkmqEFqhU3g+VWggdikgdUwgdIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEmMCQGA1UECxMdVGhhbGVzIFRTUyBFU046ODZERi00QkJDLTkzMzUxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNlcnZpY2WgghF4MIIHJzCCBQ+gAwIBAgITMwAAAd1dVx2V1K2qGwABAAAB3TANBgkqhkiG9w0BAQsFADB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMDAeFw0yMzEwMTIxOTA3MDlaFw0yNTAxMTAxOTA3MDlaMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNlMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqE4DlETqLnecdREfiWd8oun70m+Km5O1y1qKsLExRKs9LLkJYrYO2uJA/5PnYdds3aDsCS1DWlBltMMYXMrp3Te9hg2sI+4kr49Gw/YU9UOMFfLmastEXMgcctqIBqhsTm8Um6jFnRlZ0owKzxpyOEdSZ9pj7v38JHu434Hj7GMmrC92lT+anSYCrd5qvIf4Aqa/qWStA3zOCtxsKAfCyq++pPqUQWpimLu4qfswBhtJ4t7Skx1q1XkRbo1Wdcxg5NEq4Y9/J8Ep1KG5qUujzyQbupraZsDmXvv5fTokB6wySjJivj/0KAMWMdSlwdI4O6OUUEoyLXrzNF0t6t2lbRsFf0QO7HbMEwxoQrw3LFrAIS4Crv77uS0UBuXeFQq27NgLUVRm5SXYGrpTXtLgIqypHeK0tP2o1xvakAniOsgN2WXlOCip5/mCm/5hy8EzzfhtcU3DK13e6MMPbg/0N3zF9Um+6aOwFBCQrlP+r
LcetAny53WcdK+0VWLlJr+5sa5gSlLyAXoYNY3n8pu94WR2yhNUg+jymRaGM+zRDucDn64HFAHjOWMSMrPlZbsEDjCmYWbbh+EGZGNXg1un6fvxyACO8NJ9OUDoNgFy/aTHUkfZ0iFpGdJ45d49PqEwXQiXn3wsy7SvDflWJRZwBCRQ1RPFGeoYXHPnD5m6wwMCAwEAAaOCAUkwggFFMB0GA1UdDgQWBBRuovW2jI9R2kXLIdIMpaPQjiXD8TAfBgNVHSMEGDAWgBSfpxVdAF5iXYP05dJlpxtTNRnpcjBfBgNVHR8EWDBWMFSgUqBQhk5odHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNyb3NvZnQlMjBUaW1lLVN0YW1wJTIwUENBJTIwMjAxMCgxKS5jcmwwbAYIKwYBBQUHAQEEYDBeMFwGCCsGAQUFBzAChlBodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NlcnRzL01pY3Jvc29mdCUyMFRpbWUtU3RhbXAlMjBQQ0ElMjAyMDEwKDEpLmNydDAMBgNVHRMBAf8EAjAAMBYGA1UdJQEB/wQMMAoGCCsGAQUFBwMIMA4GA1UdDwEB/wQEAwIHgDANBgkqhkiG9w0BAQsFAAOCAgEALlTZsg0uBcgdZsxypW5/2ORRP8rzPIsG+7mHwmuphHbP95o7bKjU6hz1KHK/Ft70ZkO7uSRTPFLInUhmSxlnDoUOrrJk1Pc8SMASdESlEEvxL6ZteD47hUtLQtKZvxchmIuxqpnR8MRy/cd4D7/L+oqcJBaReCGloQzAYxDNGSEbBwZ1evXMalDsdPG9+7nvEXFlfUyQqdYUQ0nq6t37i15SBePSeAg7H/+Xdcwrce3xPb7O8Yk0AX7n/moGTuevTv3MgJsVe/G2J003l6hd1b72sAiRL5QYPX0Bl0Gu23p1n450Cq4GIORhDmRV9QwpLfXIdA4aCYXG4I7NOlYdqWuql0iWWzLwo2yPlT2w42JYB3082XIQcdtBkOaL38E2U5jJO3Rh6EtsOi+ZlQ1rOTv0538D3XuaoJ1OqsTHAEZQ9sw/7+91hSpomym6kGdS2M5//voMCFXLx797rNH3w+SmWaWI7ZusvdDesPr5kJV2sYz1GbqFQMEGS9iH5iOYZ1xDkcHpZP1F5zz6oMeZuEuFfhl1pqt3n85d4tuDHZ/svhBBCPcqCqOoM5YidWE0TWBi1NYsd7jzzZ3+Tsu6LQrWDwRmsoPuZo6uwkso8qV6Bx4n0UKpjWwNQpSFFrQQdRb5mQouWiEqtLsXCN2sg1aQ8GBtDOcKN0TabjtCNNswggdxMIIFWaADAgECAhMzAAAAFcXna54Cm0mZAAAAAAAVMA0GCSqGSIb3DQEBCwUAMIGIMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTIwMAYDVQQDEylNaWNyb3NvZnQgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgMjAxMDAeFw0yMTA5MzAxODIyMjVaFw0zMDA5MzAxODMyMjVaMHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA5OGmTOe0ciELeaLL1yR5vQ7VgtP97pwHB9KpbE51yMo1V/YBf2xK4OK9uT4XYDP/XE/HZveVU3Fa4n5KWv64NmeFRiMMtY0Tz3cywBAY6GB9alKDRLemjkZrBxTzxXb1hlDcwUTIcVxRM
TegCjhuje3XD9gmU3w5YQJ6xKr9cmmvHaus9ja+NSZk2pg7uhp7M62AW36MEBydUv626GIl3GoPz130/o5Tz9bshVZN7928jaTjkY+yOSxRnOlwaQ3KNi1wjjHINSi947SHJMPgyY9+tVSP3PoFVZhtaDuaRr3tpK56KTesy+uDRedGbsoy1cCGMFxPLOJiss254o2I5JasAUq7vnGpF1tnYN74kpEeHT39IM9zfUGaRnXNxF803RKJ1v2lIH1+/NmeRd+2ci/bfV+AutuqfjbsNkz2K26oElHovwUDo9Fzpk03dJQcNIIP8BDyt0cY7afomXw/TNuvXsLz1dhzPUNOwTM5TI4CvEJoLhDqhFFG4tG9ahhaYQFzymeiXtcodgLiMxhy16cg8ML6EgrXY28MyTZki1ugpoMhXV8wdJGUlNi5UPkLiWHzNgY1GIRH29wb0f2y1BzFa/ZcUlFdEtsluq9QBXpsxREdcu+N+VLEhReTwDwV2xo3xwgVGD94q0W29R6HXtqPnhZyacaue7e3PmriLq0CAwEAAaOCAd0wggHZMBIGCSsGAQQBgjcVAQQFAgMBAAEwIwYJKwYBBAGCNxUCBBYEFCqnUv5kxJq+gpE8RjUpzxD/LwTuMB0GA1UdDgQWBBSfpxVdAF5iXYP05dJlpxtTNRnpcjBcBgNVHSAEVTBTMFEGDCsGAQQBgjdMg30BATBBMD8GCCsGAQUFBwIBFjNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL0RvY3MvUmVwb3NpdG9yeS5odG0wEwYDVR0lBAwwCgYIKwYBBQUHAwgwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAU1fZWy4/oolxiaNE9lJBb186aGMQwVgYDVR0fBE8wTTBLoEmgR4ZFaHR0cDovL2NybC5taWNyb3NvZnQuY29tL3BraS9jcmwvcHJvZHVjdHMvTWljUm9vQ2VyQXV0XzIwMTAtMDYtMjMuY3JsMFoGCCsGAQUFBwEBBE4wTDBKBggrBgEFBQcwAoY+aHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXRfMjAxMC0wNi0yMy5jcnQwDQYJKoZIhvcNAQELBQADggIBAJ1VffwqreEsH2cBMSRb4Z5yS/ypb+pcFLY+TkdkeLEGk5c9MTO1OdfCcTY/2mRsfNB1OW27DzHkwo/7bNGhlBgi7ulmZzpTTd2YurYeeNg2LpypglYAA7AFvonoaeC6Ce5732pvvinLbtg/SHUB2RjebYIM9W0jVOR4U3UkV7ndn/OOPcbzaN9l9qRWqveVtihVJ9AkvUCgvxm2EhIRXT0n4ECWOKz3+SmJw7wXsFSFQrP8DJ6LGYnn8AtqgcKBGUIZUnWKNsIdw2FzLixre24/LAl4FOmRsqlb30mjdAy87JGA0j3mSj5mO0+7hvoyGtmW9I/2kQH2zsZ0/fZMcm8Qq3UwxTSwethQ/gpY3UA8x1RtnWN0SCyxTkctwRQEcb9k+SS+c23Kjgm9swFXSVRk2XPXfx5bRAGOWhmRaw2fpCjcZxkoJLo4S5pu+yFUa2pFEUep8beuyOiJXk+d0tBMdrVXVAmxaQFEfnyhYWxz/gq77EFmPWn9y8FBSX5+k77L+DvktxW/tM4+pTFRhLy/AsGConsXHRWJjXD+57XQKBqJC4822rpM+Zv/Cuk0+CQ1ZyvgDbjmjJnW4SLq8CdCPSWU5nR0W2rRnj7tfqAxM328y+l7vzhwRNGQ8cirOoo6CGJ/2XBjU02N7oJtpQUQwXEGahC0HVUzWLOhcGbyoYIC1DCCAj0CAQEwggEAoYHYpIHVMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZ
DEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNloiMKAQEwBwYFKw4DAhoDFQA2I0cZZds1oM/GfKINsQ5yJKMWEKCBgzCBgKR+MHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMA0GCSqGSIb3DQEBBQUAAgUA6h4aiTAiGA8yMDI0MDYyMDExMDMzN1oYDzIwMjQwNjIxMTEwMzM3WjB0MDoGCisGAQQBhFkKBAExLDAqMAoCBQDqHhqJAgEAMAcCAQACAgX7MAcCAQACAhH8MAoCBQDqH2wJAgEAMDYGCisGAQQBhFkKBAIxKDAmMAwGCisGAQQBhFkKAwKgCjAIAgEAAgMHoSChCjAIAgEAAgMBhqAwDQYJKoZIhvcNAQEFBQADgYEAGfu+JpdwJYpU+xUOu693Nef9bUv1la7pxXUtY+P82b5q8/FFZp5WUobGx6JrVuJTDuvqbEZYjwTzWIVUHog1kTXjji1NCFLCVnrlJqPwtH9uRQhnFDSmiP0tG1rNwht6ZViFrRexp+7cebOHSPfk+ZzrUyp9DptMAJmagfLClxAxggQNMIIECQIBATCBkzB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAITMwAAAd1dVx2V1K2qGwABAAAB3TANBglghkgBZQMEAgEFAKCCAUowGgYJKoZIhvcNAQkDMQ0GCyqGSIb3DQEJEAEEMC8GCSqGSIb3DQEJBDEiBCCZX/UOu+vfJ4kbHbQYoi1Ztz4aZycnWIB1vBYNNo/atDCB+gYLKoZIhvcNAQkQAi8xgeowgecwgeQwgb0EIGH/Di2aZaxPeJmce0fRWTftQI3TaVHFj5GI43rAMWNmMIGYMIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTACEzMAAAHdXVcdldStqhsAAQAAAd0wIgQg5Fd0dBTHG2u3SYEF2YcmJ7rHH4kHcV0GlSr/y6AQOYEwDQYJKoZIhvcNAQELBQAEggIAGcOQBnVMUPnu4d2wmccNjUncMe5i0C5VkJ7/VjqN4W6vSuKz7BFVIaUMoufkY94epjipx+Ip3BTj2heew7xB+f6zBKTlkXfakH7TEWeju3WzUYNt3kjJyS3SJeJGFJEiln1S6apObwPtbSq9EqwwFOt8pJy9bAvoxuRM6Olib/eiHr3uiKkk6FCccUgG0PYN/PRUU7htzv6uyRXzCpuNpld3eorXt6nqt6bP7k1NFcwcYSv7V3WcoQzObk5Y9G5n/1rc5Hy9eRHwnz1l7MWOZGsJ9swOBFmoVUK8tB1vPy3bjooJBm7jRT9AcdGTaRS/t5nYe5sECI51sIyq3UBPCH8rNse1BIX9WCtcar1Bg6L64lzdPC7FVSh03vVlDZhNNf7tWRZqlYID2zTaY4p4LIW47O0/Rw2Swe4+hvl49
e0v0m0FnmmwXN5097waF3Xv7FIDxbcrK+0DTv2p810Igwj6tErwxhP/367Q9EBzxODSJ8uD35DGMmHsTnViavQUBzj8LeTiA6sUZhF54AbI5dQkZLPydlR3GCmo1RKKO1VhDZnpFanj/N856MOlQqe/6x8sguPM+OpF6MWGvQH5SxsSzSf6dxhzS2pEHbirwJ4k1+tuF0LKOxNLwVVQQ9qPABNiWqml4bJk9oZ1dOTDd9EFjepHqynKk4olY3kq5sA=" with mock_wire_protocol(data_file) as protocol: extensions = protocol.get_goal_state().extensions_goal_state.extensions self.assertEqual(expected_signature, extensions[0].encoded_signature) data_file["ext_conf"] = "wire/ext_conf-no_encoded_signature.xml" with mock_wire_protocol(data_file) as protocol: extensions = protocol.get_goal_state().extensions_goal_state.extensions # extension.encoded_signature should be an empty string if property is not in the EGS for the extension self.assertEqual(extensions[0].encoded_signature, "") Azure-WALinuxAgent-a976115/tests/common/protocol/test_extensions_goal_state_from_vm_settings.py000066400000000000000000000673351510742556200333620ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. 
import json

from azurelinuxagent.common.protocol.goal_state import GoalState
from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateChannel
from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import _CaseFoldedDict
from tests.lib.mock_wire_protocol import wire_protocol_data, mock_wire_protocol
from tests.lib.tools import AgentTestCase


class ExtensionsGoalStateFromVmSettingsTestCase(AgentTestCase):
    def test_it_should_parse_vm_settings(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            def assert_property(name, value):
                self.assertEqual(value, getattr(extensions_goal_state, name), '{0} was not parsed correctly'.format(name))

            assert_property("activity_id", "a33f6f53-43d6-4625-b322-1a39651a00c9")
            assert_property("correlation_id", "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e")
            assert_property("created_on_timestamp", "2021-11-16T13:22:50.620529Z")
            assert_property("status_upload_blob", "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w")
            assert_property("status_upload_blob_type", "BlockBlob")
            assert_property("required_features", ["MultipleExtensionsPerHandler"])
            assert_property("on_hold", True)

            #
            # for the rest of the attributes, we check only 1 item in each container (but check the length of the container)
            #

            # agent families
            self.assertEqual(2, len(extensions_goal_state.agent_families), "Incorrect number of agent families. Got: {0}".format(extensions_goal_state.agent_families))
            self.assertEqual("Prod", extensions_goal_state.agent_families[0].name, "Incorrect agent family.")
            self.assertEqual(2, len(extensions_goal_state.agent_families[0].uris), "Incorrect number of uris. Got: {0}".format(extensions_goal_state.agent_families[0].uris))
            expected = "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml"
            self.assertEqual(expected, extensions_goal_state.agent_families[0].uris[0], "Unexpected URI for the agent manifest.")

            # extensions
            self.assertEqual(5, len(extensions_goal_state.extensions), "Incorrect number of extensions. Got: {0}".format(extensions_goal_state.extensions))
            self.assertEqual('Microsoft.Azure.Monitor.AzureMonitorLinuxAgent', extensions_goal_state.extensions[0].name, "Incorrect extension name")
            self.assertEqual(1, len(extensions_goal_state.extensions[0].settings[0].publicSettings), "Incorrect number of public settings")
            self.assertEqual(True, extensions_goal_state.extensions[0].settings[0].publicSettings["GCS_AUTO_CONFIG"], "Incorrect public settings")

            # dependency level (single-config)
            self.assertEqual(1, extensions_goal_state.extensions[2].settings[0].dependencyLevel, "Incorrect dependency level (single-config)")

            # dependency level (multi-config)
            self.assertEqual(1, extensions_goal_state.extensions[3].settings[1].dependencyLevel, "Incorrect dependency level (multi-config)")

    def test_it_should_parse_requested_version_properly(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertIsNone(family.version, "Version should be None")

        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json"
        with mock_wire_protocol(data_file) as protocol:
            protocol.mock_wire_data.set_etag(888)
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertEqual(family.version, "9.9.9.9", "Version should be 9.9.9.9")

    def test_it_should_parse_from_version_properly(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertIsNone(family.from_version, "fromVersion should be None")

        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json"
        with mock_wire_protocol(data_file) as protocol:
            protocol.mock_wire_data.set_etag(888)
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertEqual(family.from_version, "9.9.9.9", "fromVersion should be 9.9.9.9")

    def test_it_should_parse_is_version_from_rsm_properly(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertIsNone(family.is_version_from_rsm, "is_version_from_rsm should be None")

        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json"
        with mock_wire_protocol(data_file) as protocol:
            protocol.mock_wire_data.set_etag(888)
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertTrue(family.is_version_from_rsm, "is_version_from_rsm should be True")

        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-requested_version_properties_false.json"
        with mock_wire_protocol(data_file) as protocol:
            protocol.mock_wire_data.set_etag(888)
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertFalse(family.is_version_from_rsm, "is_version_from_rsm should be False")

    def test_it_should_parse_is_vm_enabled_for_rsm_upgrades_properly(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertIsNone(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be None")

        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json"
        with mock_wire_protocol(data_file) as protocol:
            protocol.mock_wire_data.set_etag(888)
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertTrue(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be True")

        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-requested_version_properties_false.json"
        with mock_wire_protocol(data_file) as protocol:
            protocol.mock_wire_data.set_etag(888)
            goal_state = GoalState(protocol.client)
            families = goal_state.extensions_goal_state.agent_families
            for family in families:
                self.assertFalse(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be False")

    def test_it_should_parse_missing_status_upload_blob_as_none(self):
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-no_status_upload_blob.json"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertIsNone(extensions_goal_state.status_upload_blob, "Expected status upload blob to be None")
            self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, "Expected status upload blob to be Block")

    def test_it_should_parse_missing_agent_manifests_as_empty(self):
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-no_manifests.json"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual(1, len(extensions_goal_state.agent_families), "Expected exactly one agent manifest. Got: {0}".format(extensions_goal_state.agent_families))
            self.assertListEqual([], extensions_goal_state.agent_families[0].uris, "Expected an empty list of agent manifests")

    def test_it_should_parse_missing_extension_manifests_as_empty(self):
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-no_manifests.json"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            self.assertEqual(3, len(extensions_goal_state.extensions), "Incorrect number of extensions. Got: {0}".format(extensions_goal_state.extensions))
            self.assertEqual([], extensions_goal_state.extensions[0].manifest_uris, "Expected an empty list of manifests for {0}".format(extensions_goal_state.extensions[0]))
            self.assertEqual([], extensions_goal_state.extensions[1].manifest_uris, "Expected an empty list of manifests for {0}".format(extensions_goal_state.extensions[1]))
            self.assertEqual(
                [
                    "https://umsakzkwhng2ft0jjptl.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml",
                    "https://umsafmqfbv4hgrd1hqff.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml",
                ],
                extensions_goal_state.extensions[2].manifest_uris,
                "Incorrect list of manifests for {0}".format(extensions_goal_state.extensions[2]))

    def test_it_should_default_to_block_blob_when_the_status_blob_type_is_not_valid(self):
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-invalid_blob_type.json"
        with mock_wire_protocol(data_file) as protocol:
            extensions_goal_state =
protocol.get_goal_state().extensions_goal_state self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, 'Expected BlockBlob for an invalid statusBlobType') def test_its_source_channel_should_be_host_ga_plugin(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(GoalStateChannel.HostGAPlugin, extensions_goal_state.channel, "The channel is incorrect") def test_it_should_parse_encoded_signature_plugin_property(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() # This vm settings extensions goal state has 1 extension with encodedSignature (AzureMonitorLinuxAgent). The # remaining extensions do not have encodedSignature expected_signature = "MIInEAYJKoZIhvcNAQcCoIInATCCJv0CAQMxDTALBglghkgBZQMEAgIwCQYHgUuDSAcICaCCDXYwggX0MIID3KADAgECAhMzAAADrzBADkyjTQVBAAAAAAOvMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTEwHhcNMjMxMTE2MTkwOTAwWhcNMjQxMTE0MTkwOTAwWjB0MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDOS8s1ra6f0YGtg0OhEaQa/t3Q+q1MEHhWJhqQVuO5amYXQpy8MDPNoJYk+FWAhePP5LxwcSge5aen+f5Q6WNPd6EDxGzotvVpNi5ve0H97S3F7C/axDfKxyNh21MG0W8Sb0vxi/vorcLHOL9i+t2D6yvvDzLlEefUCbQV/zGCBjXGlYJcUj6RAzXyeNANxSpKXAGd7Fh+ocGHPPphcD9LQTOJgG7Y7aYztHqBLJiQQ4eAgZNU4ac6+8LnEGALgo1ydC5BJEuJQjYKbNTy959HrKSu7LO3Ws0w8jw6pYdC1IMpdTkk2puTgY2PDNzBtLM4evG7FYer3WX+8t1UMYNTAgMBAAGjggFzMIIBbzAfBgNVHSUEGDAWBgorBgEEAYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQURxxxNPIEPGSO8kqz+bgCAQWGXsEwRQYDVR0RBD4wPKQ6MDgxHjAcBgNVBAsTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UEBRMNMjMwMDEyKzUwMTgyNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20v
cGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAISxFt/zR2frTFPB45YdmhZpB2nNJoOoi+qlgcTlnO4QwlYN1w/vYwbDy/oFJolD5r6FMJd0RGcgEM8q9TgQ2OC7gQEmhweVJ7yuKJlQBH7P7Pg5RiqgV3cSonJ+OM4kFHbP3gPLiyzssSQdRuPY1mIWoGg9i7Y4ZC8ST7WhpSyc0pns2XsUe1XsIjaUcGu7zd7gg97eCUiLRdVklPmpXobH9CEAWakRUGNICYN2AgjhRTC4j3KJfqMkU04R6Toyh4/Toswm1uoDcGr5laYnTfcX3u5WnJqJLhuPe8Uj9kGAOcyo0O1mNwDa+LhFEzB6CB32+wfJMumfr6degvLTe8x55urQLeTjimBQgS49BSUkhFN7ois3cZyNpnrMca5AZaC7pLI72vuqSsSlLalGOcZmPHZGYJqZ0BacN274OZ80Q8B11iNokns9Od348bMb5Z4fihxaBWebl8kWEi2OPvQImOAeq3nt7UWJBzJYLAGEpfasaA3ZQgIcEXdD+uwo6ymMzDY6UamFOfYqYWXkntxDGu7ngD2ugKUuccYKJJRiiz+LAUcj90BVcSHRLQop9N8zoALr/1sJuwPrVAtxHNEgSW+AKBqIxYWM4Ev32l6agSUAezLMbq5f3d8x9qzT031jMDT+sUAoCw0M5wVtCUQcqINPuYjbS1WgJyZIiEkBMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E
11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCGWIwghleAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAOvMEAOTKNNBUEAAAAAA68wCwYJYIZIAWUDBAICoFkwFgYJKoZIhvcNAQkDMQkGB4FLg0gHCAkwPwYJKoZIhvcNAQkEMTIEMDBbd8WC98w2hp0LRsyGXkhY0ZY+y0Pl20deVXonOXR+vDsyK96L9uBzpNRlolZD0DANBgkqhkiG9w0BAQEFAASCAQAIaK9t6Unz6YcKR2q8D2Vjvq9j+YK0U1+tb8s2ZslmmL19Yeb+NRy4tkS7lVEmMYRiFTy+jyis6UGL81ziXEXqAfqjkJt/zjN/8Qek91fzKYJMuCfEm6xVv+gfNHCp0fuGn4b9QNoD7UUMe4oB
skSSLSiW0ri9FblSdjeoLZKvoRzHFBF94wI2Kw0iCBUQgNKHKT3lyG9D4NQySAaS0BnYG/s/HPgGMPT6peWRWAXkuTQ8zxb98pOzdf3HZ4Zz2n8qEh1BM6nHba2CKnDP0yjEz7OERVWcLUVPcTHC/xG94cp1gdlKQ09t3H7lBwccxmztUt9sIGUAdeJFAChTvvnSoYIXRDCCF0AGCyqGSIb3DQEJEAIOMYIXLzCCFysGCSqGSIb3DQEHAqCCFxwwghcYAgEDMQ8wDQYJYIZIAWUDBAIBBQAwggFzBgsqhkiG9w0BCRABBKCCAWIEggFeMIIBWgIBAQYKKwYBBAGEWQoDATAxMA0GCWCGSAFlAwQCAQUABCALbe+1JlANO/4xRH8dJHYO8uMX6ee/KhxzL1ZHE4fguAIGZnLzb33XGBMyMDI0MDYyMDIzMzgyOS4yMzNaMASAAgH0AhgsprYE/OXhkFp093+I2SkmqEFqhU3g+VWggdikgdUwgdIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEmMCQGA1UECxMdVGhhbGVzIFRTUyBFU046ODZERi00QkJDLTkzMzUxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNlcnZpY2WgghF4MIIHJzCCBQ+gAwIBAgITMwAAAd1dVx2V1K2qGwABAAAB3TANBgkqhkiG9w0BAQsFADB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMDAeFw0yMzEwMTIxOTA3MDlaFw0yNTAxMTAxOTA3MDlaMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNlMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqE4DlETqLnecdREfiWd8oun70m+Km5O1y1qKsLExRKs9LLkJYrYO2uJA/5PnYdds3aDsCS1DWlBltMMYXMrp3Te9hg2sI+4kr49Gw/YU9UOMFfLmastEXMgcctqIBqhsTm8Um6jFnRlZ0owKzxpyOEdSZ9pj7v38JHu434Hj7GMmrC92lT+anSYCrd5qvIf4Aqa/qWStA3zOCtxsKAfCyq++pPqUQWpimLu4qfswBhtJ4t7Skx1q1XkRbo1Wdcxg5NEq4Y9/J8Ep1KG5qUujzyQbupraZsDmXvv5fTokB6wySjJivj/0KAMWMdSlwdI4O6OUUEoyLXrzNF0t6t2lbRsFf0QO7HbMEwxoQrw3LFrAIS4Crv77uS0UBuXeFQq27NgLUVRm5SXYGrpTXtLgIqypHeK0tP2o1xvakAniOsgN2WXlOCip5/mCm/5hy8EzzfhtcU3DK13e6MMPbg/0N3zF9Um+6aOwFBCQrlP+rLcetAny53WcdK+0VWLlJr+5sa5gSlLyAXoYNY3n8pu94WR2yhNUg+jymRaGM+zRDucDn64HFAHjOWMSMrPlZbsEDjCmYWbbh+EGZGNXg1un6fvxyACO
8NJ9OUDoNgFy/aTHUkfZ0iFpGdJ45d49PqEwXQiXn3wsy7SvDflWJRZwBCRQ1RPFGeoYXHPnD5m6wwMCAwEAAaOCAUkwggFFMB0GA1UdDgQWBBRuovW2jI9R2kXLIdIMpaPQjiXD8TAfBgNVHSMEGDAWgBSfpxVdAF5iXYP05dJlpxtTNRnpcjBfBgNVHR8EWDBWMFSgUqBQhk5odHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNyb3NvZnQlMjBUaW1lLVN0YW1wJTIwUENBJTIwMjAxMCgxKS5jcmwwbAYIKwYBBQUHAQEEYDBeMFwGCCsGAQUFBzAChlBodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NlcnRzL01pY3Jvc29mdCUyMFRpbWUtU3RhbXAlMjBQQ0ElMjAyMDEwKDEpLmNydDAMBgNVHRMBAf8EAjAAMBYGA1UdJQEB/wQMMAoGCCsGAQUFBwMIMA4GA1UdDwEB/wQEAwIHgDANBgkqhkiG9w0BAQsFAAOCAgEALlTZsg0uBcgdZsxypW5/2ORRP8rzPIsG+7mHwmuphHbP95o7bKjU6hz1KHK/Ft70ZkO7uSRTPFLInUhmSxlnDoUOrrJk1Pc8SMASdESlEEvxL6ZteD47hUtLQtKZvxchmIuxqpnR8MRy/cd4D7/L+oqcJBaReCGloQzAYxDNGSEbBwZ1evXMalDsdPG9+7nvEXFlfUyQqdYUQ0nq6t37i15SBePSeAg7H/+Xdcwrce3xPb7O8Yk0AX7n/moGTuevTv3MgJsVe/G2J003l6hd1b72sAiRL5QYPX0Bl0Gu23p1n450Cq4GIORhDmRV9QwpLfXIdA4aCYXG4I7NOlYdqWuql0iWWzLwo2yPlT2w42JYB3082XIQcdtBkOaL38E2U5jJO3Rh6EtsOi+ZlQ1rOTv0538D3XuaoJ1OqsTHAEZQ9sw/7+91hSpomym6kGdS2M5//voMCFXLx797rNH3w+SmWaWI7ZusvdDesPr5kJV2sYz1GbqFQMEGS9iH5iOYZ1xDkcHpZP1F5zz6oMeZuEuFfhl1pqt3n85d4tuDHZ/svhBBCPcqCqOoM5YidWE0TWBi1NYsd7jzzZ3+Tsu6LQrWDwRmsoPuZo6uwkso8qV6Bx4n0UKpjWwNQpSFFrQQdRb5mQouWiEqtLsXCN2sg1aQ8GBtDOcKN0TabjtCNNswggdxMIIFWaADAgECAhMzAAAAFcXna54Cm0mZAAAAAAAVMA0GCSqGSIb3DQEBCwUAMIGIMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTIwMAYDVQQDEylNaWNyb3NvZnQgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgMjAxMDAeFw0yMTA5MzAxODIyMjVaFw0zMDA5MzAxODMyMjVaMHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA5OGmTOe0ciELeaLL1yR5vQ7VgtP97pwHB9KpbE51yMo1V/YBf2xK4OK9uT4XYDP/XE/HZveVU3Fa4n5KWv64NmeFRiMMtY0Tz3cywBAY6GB9alKDRLemjkZrBxTzxXb1hlDcwUTIcVxRMTegCjhuje3XD9gmU3w5YQJ6xKr9cmmvHaus9ja+NSZk2pg7uhp7M62AW36MEBydUv626GIl3GoPz130/o5Tz9bshVZN7928jaTjkY+yOSxRnOlwaQ3K
Ni1wjjHINSi947SHJMPgyY9+tVSP3PoFVZhtaDuaRr3tpK56KTesy+uDRedGbsoy1cCGMFxPLOJiss254o2I5JasAUq7vnGpF1tnYN74kpEeHT39IM9zfUGaRnXNxF803RKJ1v2lIH1+/NmeRd+2ci/bfV+AutuqfjbsNkz2K26oElHovwUDo9Fzpk03dJQcNIIP8BDyt0cY7afomXw/TNuvXsLz1dhzPUNOwTM5TI4CvEJoLhDqhFFG4tG9ahhaYQFzymeiXtcodgLiMxhy16cg8ML6EgrXY28MyTZki1ugpoMhXV8wdJGUlNi5UPkLiWHzNgY1GIRH29wb0f2y1BzFa/ZcUlFdEtsluq9QBXpsxREdcu+N+VLEhReTwDwV2xo3xwgVGD94q0W29R6HXtqPnhZyacaue7e3PmriLq0CAwEAAaOCAd0wggHZMBIGCSsGAQQBgjcVAQQFAgMBAAEwIwYJKwYBBAGCNxUCBBYEFCqnUv5kxJq+gpE8RjUpzxD/LwTuMB0GA1UdDgQWBBSfpxVdAF5iXYP05dJlpxtTNRnpcjBcBgNVHSAEVTBTMFEGDCsGAQQBgjdMg30BATBBMD8GCCsGAQUFBwIBFjNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL0RvY3MvUmVwb3NpdG9yeS5odG0wEwYDVR0lBAwwCgYIKwYBBQUHAwgwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAU1fZWy4/oolxiaNE9lJBb186aGMQwVgYDVR0fBE8wTTBLoEmgR4ZFaHR0cDovL2NybC5taWNyb3NvZnQuY29tL3BraS9jcmwvcHJvZHVjdHMvTWljUm9vQ2VyQXV0XzIwMTAtMDYtMjMuY3JsMFoGCCsGAQUFBwEBBE4wTDBKBggrBgEFBQcwAoY+aHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXRfMjAxMC0wNi0yMy5jcnQwDQYJKoZIhvcNAQELBQADggIBAJ1VffwqreEsH2cBMSRb4Z5yS/ypb+pcFLY+TkdkeLEGk5c9MTO1OdfCcTY/2mRsfNB1OW27DzHkwo/7bNGhlBgi7ulmZzpTTd2YurYeeNg2LpypglYAA7AFvonoaeC6Ce5732pvvinLbtg/SHUB2RjebYIM9W0jVOR4U3UkV7ndn/OOPcbzaN9l9qRWqveVtihVJ9AkvUCgvxm2EhIRXT0n4ECWOKz3+SmJw7wXsFSFQrP8DJ6LGYnn8AtqgcKBGUIZUnWKNsIdw2FzLixre24/LAl4FOmRsqlb30mjdAy87JGA0j3mSj5mO0+7hvoyGtmW9I/2kQH2zsZ0/fZMcm8Qq3UwxTSwethQ/gpY3UA8x1RtnWN0SCyxTkctwRQEcb9k+SS+c23Kjgm9swFXSVRk2XPXfx5bRAGOWhmRaw2fpCjcZxkoJLo4S5pu+yFUa2pFEUep8beuyOiJXk+d0tBMdrVXVAmxaQFEfnyhYWxz/gq77EFmPWn9y8FBSX5+k77L+DvktxW/tM4+pTFRhLy/AsGConsXHRWJjXD+57XQKBqJC4822rpM+Zv/Cuk0+CQ1ZyvgDbjmjJnW4SLq8CdCPSWU5nR0W2rRnj7tfqAxM328y+l7vzhwRNGQ8cirOoo6CGJ/2XBjU02N7oJtpQUQwXEGahC0HVUzWLOhcGbyoYIC1DCCAj0CAQEwggEAoYHYpIHVMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNV
BAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNloiMKAQEwBwYFKw4DAhoDFQA2I0cZZds1oM/GfKINsQ5yJKMWEKCBgzCBgKR+MHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMA0GCSqGSIb3DQEBBQUAAgUA6h4aiTAiGA8yMDI0MDYyMDExMDMzN1oYDzIwMjQwNjIxMTEwMzM3WjB0MDoGCisGAQQBhFkKBAExLDAqMAoCBQDqHhqJAgEAMAcCAQACAgX7MAcCAQACAhH8MAoCBQDqH2wJAgEAMDYGCisGAQQBhFkKBAIxKDAmMAwGCisGAQQBhFkKAwKgCjAIAgEAAgMHoSChCjAIAgEAAgMBhqAwDQYJKoZIhvcNAQEFBQADgYEAGfu+JpdwJYpU+xUOu693Nef9bUv1la7pxXUtY+P82b5q8/FFZp5WUobGx6JrVuJTDuvqbEZYjwTzWIVUHog1kTXjji1NCFLCVnrlJqPwtH9uRQhnFDSmiP0tG1rNwht6ZViFrRexp+7cebOHSPfk+ZzrUyp9DptMAJmagfLClxAxggQNMIIECQIBATCBkzB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAITMwAAAd1dVx2V1K2qGwABAAAB3TANBglghkgBZQMEAgEFAKCCAUowGgYJKoZIhvcNAQkDMQ0GCyqGSIb3DQEJEAEEMC8GCSqGSIb3DQEJBDEiBCCZX/UOu+vfJ4kbHbQYoi1Ztz4aZycnWIB1vBYNNo/atDCB+gYLKoZIhvcNAQkQAi8xgeowgecwgeQwgb0EIGH/Di2aZaxPeJmce0fRWTftQI3TaVHFj5GI43rAMWNmMIGYMIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTACEzMAAAHdXVcdldStqhsAAQAAAd0wIgQg5Fd0dBTHG2u3SYEF2YcmJ7rHH4kHcV0GlSr/y6AQOYEwDQYJKoZIhvcNAQELBQAEggIAGcOQBnVMUPnu4d2wmccNjUncMe5i0C5VkJ7/VjqN4W6vSuKz7BFVIaUMoufkY94epjipx+Ip3BTj2heew7xB+f6zBKTlkXfakH7TEWeju3WzUYNt3kjJyS3SJeJGFJEiln1S6apObwPtbSq9EqwwFOt8pJy9bAvoxuRM6Olib/eiHr3uiKkk6FCccUgG0PYN/PRUU7htzv6uyRXzCpuNpld3eorXt6nqt6bP7k1NFcwcYSv7V3WcoQzObk5Y9G5n/1rc5Hy9eRHwnz1l7MWOZGsJ9swOBFmoVUK8tB1vPy3bjooJBm7jRT9AcdGTaRS/t5nYe5sECI51sIyq3UBPCH8rNse1BIX9WCtcar1Bg6L64lzdPC7FVSh03vVlDZhNNf7tWRZqlYID2zTaY4p4LIW47O0/Rw2Swe4+hvl49e0v0m0FnmmwXN5097waF3Xv7FIDxbcrK+0DTv2p810Igwj6tErwxhP/367Q9EBzxODSJ8uD35DGMmHsTnViavQUBzj8LeTiA6sUZhF54AbI5dQkZLPy
dlR3GCmo1RKKO1VhDZnpFanj/N856MOlQqe/6x8sguPM+OpF6MWGvQH5SxsSzSf6dxhzS2pEHbirwJ4k1+tuF0LKOxNLwVVQQ9qPABNiWqml4bJk9oZ1dOTDd9EFjepHqynKk4olY3kq5sA=" with mock_wire_protocol(data_file) as protocol: extensions = protocol.get_goal_state().extensions_goal_state.extensions self.assertEqual(expected_signature, extensions[0].encoded_signature) # extension.encoded_signature should be an empty string if the property does not exist for the extension for i in range(1, 5): self.assertEqual(extensions[i].encoded_signature, "") class CaseFoldedDictionaryTestCase(AgentTestCase): def test_it_should_retrieve_items_ignoring_case(self): dictionary = json.loads('''{ "activityId": "2e7f8b5d-f637-4721-b757-cb190d49b4e9", "StatusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status" }, "gaFamilies": [ { "Name": "Prod", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] } ] }''') case_folded = _CaseFoldedDict.from_dict(dictionary) def test_retrieve_item(key, expected_value): """ Test for operators [] and in, and methods get() and has_key() """ try: self.assertEqual(expected_value, case_folded[key], "Operator [] retrieved incorrect value for '{0}'".format(key)) except KeyError: self.fail("Operator [] failed to retrieve '{0}'".format(key)) self.assertTrue(case_folded.has_key(key), "Method has_key() did not find '{0}'".format(key)) self.assertEqual(expected_value, case_folded.get(key), "Method get() retrieved incorrect value for '{0}'".format(key)) self.assertTrue(key in case_folded, "Operator in did not find key '{0}'".format(key)) test_retrieve_item("activityId", "2e7f8b5d-f637-4721-b757-cb190d49b4e9") 
        test_retrieve_item("activityid", "2e7f8b5d-f637-4721-b757-cb190d49b4e9")
        test_retrieve_item("ACTIVITYID", "2e7f8b5d-f637-4721-b757-cb190d49b4e9")

        self.assertEqual("BlockBlob", case_folded["statusuploadblob"]["statusblobtype"], "Failed to retrieve item in nested dictionary")
        self.assertEqual("Prod", case_folded["gafamilies"][0]["name"], "Failed to retrieve item in nested array")
Azure-WALinuxAgent-a976115/tests/common/protocol/test_goal_state.py000066400000000000000000001275671510742556200254420ustar00rootroot00000000000000
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the Apache License.
import contextlib
import datetime
import glob
import os
import re
import subprocess
import shutil
import time

from azurelinuxagent.common import conf
from azurelinuxagent.common.future import httpclient, urlparse, UTC
from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource, GoalStateChannel
from azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config import ExtensionsGoalStateFromExtensionsConfig
from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import ExtensionsGoalStateFromVmSettings
from azurelinuxagent.common.protocol import hostplugin
from azurelinuxagent.common.protocol.goal_state import GoalState, _GET_GOAL_STATE_MAX_ATTEMPTS, GoalStateProperties
from azurelinuxagent.common.exception import ProtocolError
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME
from azurelinuxagent.common.event import WALAEventOperation
from azurelinuxagent.common.protocol.restapi import ExtensionRequestedState
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib import wire_protocol_data
from tests.lib.http_request_predicates import HttpRequestPredicates
from tests.lib.tools import AgentTestCase, patch, load_data


class GoalStateTestCase(AgentTestCase, HttpRequestPredicates):
    def test_it_should_use_vm_settings_by_default(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            protocol.mock_wire_data.set_etag(888)
            extensions_goal_state = GoalState(protocol.client).extensions_goal_state
            self.assertTrue(
                isinstance(extensions_goal_state, ExtensionsGoalStateFromVmSettings),
                'The extensions goal state should have been created from the vmSettings (got: {0})'.format(type(extensions_goal_state)))

    def _assert_is_extensions_goal_state_from_extensions_config(self, extensions_goal_state):
        self.assertTrue(
            isinstance(extensions_goal_state, ExtensionsGoalStateFromExtensionsConfig),
            'The extensions goal state should have been created from the extensionsConfig (got: {0})'.format(type(extensions_goal_state)))

    def test_it_should_use_extensions_config_when_fast_track_is_disabled(self):
        with patch("azurelinuxagent.common.conf.get_enable_fast_track", return_value=False):
            with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
                self._assert_is_extensions_goal_state_from_extensions_config(GoalState(protocol.client).extensions_goal_state)

    def test_it_should_use_extensions_config_when_fast_track_is_not_supported(self):
        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                return MockHttpResponse(httpclient.NOT_FOUND)
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, http_get_handler=http_get_handler) as protocol:
            self._assert_is_extensions_goal_state_from_extensions_config(GoalState(protocol.client).extensions_goal_state)

    def test_it_should_use_extensions_config_when_the_host_ga_plugin_version_is_not_supported(self):
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-unsupported_version.json"
        with mock_wire_protocol(data_file) as protocol:
            self._assert_is_extensions_goal_state_from_extensions_config(GoalState(protocol.client).extensions_goal_state)

    def test_it_should_retry_get_vm_settings_on_resource_gone_error(self):
        # Requests to the hostgaplugin include the Container ID and the RoleConfigName as headers; when the hostgaplugin returns GONE (HTTP status 410) the agent
        # needs to get a new goal state and retry the request with updated values for the Container ID and RoleConfigName headers.
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            # Do not mock the vmSettings request at the level of azurelinuxagent.common.utils.restutil.http_request. The GONE status is handled
            # in the internal _http_request, which we mock below.
            protocol.do_not_mock = lambda method, url: method == "GET" and self.is_host_plugin_vm_settings_request(url)

            request_headers = []  # we expect a retry with new headers and use this array to persist the headers of each request

            def http_get_vm_settings(_method, _host, _relative_url, _timeout, **kwargs):
                request_headers.append(kwargs["headers"])
                if len(request_headers) == 1:
                    # Fail the first request with status GONE and update the mock data to return the new Container ID and RoleConfigName that should be
                    # used in the headers of the retry request.
                    protocol.mock_wire_data.set_container_id("GET_VM_SETTINGS_TEST_CONTAINER_ID")
                    protocol.mock_wire_data.set_role_config_name("GET_VM_SETTINGS_TEST_ROLE_CONFIG_NAME")
                    return MockHttpResponse(status=httpclient.GONE)
                # For this test we are interested only in the retry logic, so the second request (the retry) is not important; we use NOT_MODIFIED (304) for simplicity.
                return MockHttpResponse(status=httpclient.NOT_MODIFIED)

            with patch("azurelinuxagent.common.utils.restutil._http_request", side_effect=http_get_vm_settings):
                protocol.client.update_goal_state()

            self.assertEqual(2, len(request_headers), "We expected 2 requests for vmSettings: the original request and the retry request")
            self.assertEqual("GET_VM_SETTINGS_TEST_CONTAINER_ID", request_headers[1][hostplugin._HEADER_CONTAINER_ID], "The retry request did not include the expected header for the ContainerId")
            self.assertEqual("GET_VM_SETTINGS_TEST_ROLE_CONFIG_NAME", request_headers[1][hostplugin._HEADER_HOST_CONFIG_NAME], "The retry request did not include the expected header for the RoleConfigName")

    def test_fetch_goal_state_should_raise_on_incomplete_goal_state(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.mock_wire_data.data_files = wire_protocol_data.DATA_FILE_NOOP_GS
            protocol.mock_wire_data.reload()
            protocol.mock_wire_data.set_incarnation(2)
            with patch('time.sleep') as mock_sleep:
                with self.assertRaises(ProtocolError):
                    GoalState(protocol.client)
                self.assertEqual(_GET_GOAL_STATE_MAX_ATTEMPTS, mock_sleep.call_count, "Unexpected number of retries")

    def test_fetching_the_goal_state_should_save_the_shared_config(self):
        # SharedConfig.xml is used by other components (Azsec and Singularity/HPC Infiniband); verify that we do not delete it
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            _ = GoalState(protocol.client)
            shared_config = os.path.join(conf.get_lib_dir(), 'SharedConfig.xml')
            self.assertTrue(os.path.exists(shared_config), "{0} should have been created".format(shared_config))

    def test_fetching_the_goal_state_should_save_the_goal_state_to_the_history_directory(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            protocol.mock_wire_data.set_incarnation(999)
            protocol.mock_wire_data.set_etag(888)
            _ = GoalState(protocol.client, save_to_history=True)
            self._assert_directory_contents(
                self._find_history_subdirectory("999-888"),
                ["GoalState.xml", "ExtensionsConfig.xml", "VmSettings.json", "Certificates.json", "SharedConfig.xml", "HostingEnvironmentConfig.xml"])

    @staticmethod
    def _get_history_directory():
        return os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME)

    def _find_history_subdirectory(self, tag):
        matches = glob.glob(os.path.join(self._get_history_directory(), "*_{0}".format(tag)))
        self.assertTrue(len(matches) == 1, "Expected one history directory for tag {0}. Got: {1}".format(tag, matches))
        return matches[0]

    def _assert_directory_contents(self, directory, expected_files):
        actual_files = os.listdir(directory)
        expected_files.sort()
        actual_files.sort()
        self.assertEqual(expected_files, actual_files, "The expected files were not saved to {0}".format(directory))

    def test_update_should_create_new_history_subdirectories(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            protocol.mock_wire_data.set_incarnation(123)
            protocol.mock_wire_data.set_etag(654)
            goal_state = GoalState(protocol.client, save_to_history=True)
            self._assert_directory_contents(
                self._find_history_subdirectory("123-654"),
                ["GoalState.xml", "ExtensionsConfig.xml", "VmSettings.json", "Certificates.json", "SharedConfig.xml", "HostingEnvironmentConfig.xml"])

            def http_get_handler(url, *_, **__):
                if HttpRequestPredicates.is_host_plugin_vm_settings_request(url):
                    return MockHttpResponse(status=httpclient.NOT_MODIFIED)
                return None

            protocol.mock_wire_data.set_incarnation(234)
            protocol.set_http_handlers(http_get_handler=http_get_handler)
            goal_state.update()
            self._assert_directory_contents(
                self._find_history_subdirectory("234-654"),
                ["GoalState.xml", "ExtensionsConfig.xml", "Certificates.json", "SharedConfig.xml", "HostingEnvironmentConfig.xml"])

            protocol.mock_wire_data.set_etag(987)
            protocol.set_http_handlers(http_get_handler=None)
            goal_state.update()
            self._assert_directory_contents(
                self._find_history_subdirectory("234-987"), ["VmSettings.json"])

    def test_it_should_redact_extensions_config(self):
        data_file = wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_redact.xml"
        with mock_wire_protocol(data_file, detect_protocol=False) as protocol:
            protocol.mock_wire_data.set_incarnation(888)  # set the incarnation to a known value that we can use to find the history directory
            goal_state = GoalState(protocol.client, save_to_history=True)

            if goal_state.extensions_goal_state.source != GoalStateSource.Fabric:
                raise Exception("The test goal state should be Fabric (it is {0})".format(goal_state.extensions_goal_state.source))
            protected_settings = [s.protectedSettings for s in [e.settings[0] for e in goal_state.extensions_goal_state.extensions]]
            if len(protected_settings) == 0:
                raise Exception("The test goal state does not include any protected settings")

            history_directory = self._find_history_subdirectory("888")
            extensions_config = os.path.join(history_directory, "ExtensionsConfig.xml")
            with open(extensions_config, "r") as f:
                history_contents = f.read()

            # NOTE: The XML element names in the two regular expressions below were stripped by an HTML-sanitization pass; they have been
            # reconstructed here from the variable names and assertion messages (InVMArtifactsProfileBlob and StatusUploadBlob).
            vmap_blob = re.sub(r'(?s)(.*<InVMArtifactsProfileBlob.*>)(.*)(</InVMArtifactsProfileBlob>.*)', r'\2', goal_state.extensions_goal_state._text)
            query = urlparse(vmap_blob).query
            redacted = vmap_blob.replace(query, "***REDACTED***")
            self.assertNotIn(query, history_contents, "The VMAP query string was not redacted from the history")
            self.assertNotIn(vmap_blob, history_contents, "The VMAP URL was not redacted in the history")
            self.assertIn(redacted, history_contents, "Could not find the redacted VMAP URL in the history")

            status_blob = re.sub(r'(?s)(.*<StatusUploadBlob.*>)(.*)(</StatusUploadBlob>.*)', r'\2', goal_state.extensions_goal_state._text)
            query = urlparse(status_blob).query
            redacted = status_blob.replace(query, "***REDACTED***")
            self.assertNotIn(query, history_contents, "The Status query string was not redacted from the history")
            self.assertNotIn(status_blob, history_contents, "The Status URL was not redacted in the history")
            self.assertIn(redacted, history_contents, "Could not find the redacted Status URL in the history")

            for s in protected_settings:
                self.assertNotIn(s, history_contents, "The protected settings were not redacted from the history")
            matches = re.findall(r'"protectedSettings"\s*:\s*"\*\*\*REDACTED\*\*\*"', history_contents)
            self.assertEqual(len(matches), len(protected_settings), "Could not find the expected number of redacted settings in {0}.\nExpected {1}.\n{2}".format(extensions_config, len(protected_settings), history_contents))

    def test_it_should_redact_vm_settings(self):
        # NOTE: vm_settings-redact_formatted.json is the same as vm_settings-redact.json, but formatted for easier reading
        for test_file in ["hostgaplugin/vm_settings-redact.json", "hostgaplugin/vm_settings-redact_formatted.json"]:
            data_file = wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE.copy()
            data_file["vm_settings"] = test_file
            data_file["ETag"] = "123"
            with mock_wire_protocol(data_file, detect_protocol=False) as protocol:
                goal_state = GoalState(protocol.client, save_to_history=True)

                if goal_state.extensions_goal_state.source != GoalStateSource.FastTrack:
                    raise Exception("The test goal state should be FastTrack (it is {0}) [test: {1}]".format(goal_state.extensions_goal_state.source, test_file))
                protected_settings = [s.protectedSettings for s in [e.settings[0] for e in goal_state.extensions_goal_state.extensions]]
                if len(protected_settings) == 0:
                    raise Exception("The test goal state does not include any protected settings [test: {0}]".format(test_file))

                history_directory = self._find_history_subdirectory("*-123")
                vm_settings = os.path.join(history_directory, "VmSettings.json")
                with open(vm_settings, "r") as f:
                    history_contents = f.read()

                status_blob = goal_state.extensions_goal_state.status_upload_blob
                query = urlparse(status_blob).query
                redacted = status_blob.replace(query, "***REDACTED***")
                self.assertNotIn(query, history_contents, "The Status query string was not redacted from the history [test: {0}]".format(test_file))
                self.assertNotIn(status_blob, history_contents, "The Status URL was not redacted in the history [test: {0}]".format(test_file))
                self.assertIn(redacted, history_contents, "Could not find the redacted Status URL in the history [test: {0}]".format(test_file))

                for s in protected_settings:
                    self.assertNotIn(s, history_contents, "The protected settings were not redacted from the history [test: {0}]".format(test_file))
                matches = re.findall(r'"protectedSettings"\s*:\s*"\*\*\*REDACTED\*\*\*"', history_contents)
                self.assertEqual(len(matches), len(protected_settings), "Could not find the expected number of redacted settings in {0} [test {1}].\nExpected {2}.\n{3}".format(vm_settings, test_file, len(protected_settings), history_contents))

                shutil.rmtree(history_directory)  # clean up the history directory in-between test cases to avoid stale history files

    def test_it_should_save_vm_settings_on_parse_errors(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            invalid_vm_settings_file = "hostgaplugin/vm_settings-parse_error.json"
            data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
            data_file["vm_settings"] = invalid_vm_settings_file
            protocol.mock_wire_data = wire_protocol_data.WireProtocolData(data_file)

            with self.assertRaises(ProtocolError):  # the parsing error will cause an exception
                _ = GoalState(protocol.client)

            # Do an extra call to update the goal state; this should save the vmsettings to the history directory
            # only once (self._find_history_subdirectory asserts 1 single match)
            time.sleep(0.1)  # add a short delay to ensure that a new timestamp would be saved in the history folder
            protocol.mock_wire_data.set_etag(888)
            with self.assertRaises(ProtocolError):
                _ = GoalState(protocol.client)

            history_directory = self._find_history_subdirectory("888")
            vm_settings_file = os.path.join(history_directory, "VmSettings.json")
            self.assertTrue(os.path.exists(vm_settings_file), "{0} was not saved".format(vm_settings_file))
            expected = load_data(invalid_vm_settings_file)
            actual = fileutil.read_file(vm_settings_file)
            self.assertEqual(expected, actual, "The vmSettings were not saved correctly")

    def test_should_not_save_to_the_history_by_default(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            _ = GoalState(protocol.client)  # omit the save_to_history parameter
            history = self._get_history_directory()
            self.assertFalse(os.path.exists(history), "The history directory should not have been created")

    @staticmethod
    @contextlib.contextmanager
    def _create_protocol_ws_and_hgap_in_sync():
        """
        Creates a mock protocol in which the HostGAPlugin and the WireServer are in sync, both of them
        returning the same Fabric goal state.
        """
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()

        with mock_wire_protocol(data_file) as protocol:
            timestamp = datetime.datetime.now(UTC)
            incarnation = '111'
            etag = '111111'
            protocol.mock_wire_data.set_incarnation(incarnation, timestamp=timestamp)
            protocol.mock_wire_data.set_etag(etag, timestamp=timestamp)
            protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric)

            # Do a few checks on the mock data to ensure we catch changes in internal implementations
            # that may invalidate this setup.
            vm_settings, _ = protocol.client.get_host_plugin().fetch_vm_settings()
            if vm_settings.etag != etag:
                raise Exception("The HostGAPlugin is not in sync. Expected ETag {0}. Got {1}".format(etag, vm_settings.etag))
            if vm_settings.source != GoalStateSource.Fabric:
                raise Exception("The HostGAPlugin should be returning a Fabric goal state. Got {0}".format(vm_settings.source))

            goal_state = GoalState(protocol.client)
            if goal_state.incarnation != incarnation:
                raise Exception("The WireServer is not in sync. Expected incarnation {0}. Got {1}".format(incarnation, goal_state.incarnation))
            if goal_state.extensions_goal_state.correlation_id != vm_settings.correlation_id:
                raise Exception(
                    "The correlation ID in the WireServer and HostGAPlugin are not in sync. WS: {0} HGAP: {1}".format(
                        goal_state.extensions_goal_state.correlation_id, vm_settings.correlation_id))

            yield protocol

    def _assert_goal_state(self, goal_state, goal_state_id, channel=None, source=None):
        self.assertIn(goal_state_id, goal_state.extensions_goal_state.id, "Incorrect Goal State ID")
        if channel is not None:
            self.assertEqual(channel, goal_state.extensions_goal_state.channel, "Incorrect Goal State channel")
        if source is not None:
            self.assertEqual(source, goal_state.extensions_goal_state.source, "Incorrect Goal State source")

    def test_it_should_ignore_fabric_goal_states_from_the_host_ga_plugin(self):
        with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol:
            #
            # Verify __init__()
            #
            expected_incarnation = '111'  # test setup initializes to this value
            timestamp = datetime.datetime.now(UTC) + datetime.timedelta(seconds=15)
            protocol.mock_wire_data.set_etag('22222', timestamp)

            goal_state = GoalState(protocol.client)

            self._assert_goal_state(goal_state, expected_incarnation, channel=GoalStateChannel.WireServer)

            #
            # Verify update()
            #
            timestamp += datetime.timedelta(seconds=15)
            protocol.mock_wire_data.set_etag('333333', timestamp)

            goal_state.update()

            self._assert_goal_state(goal_state, expected_incarnation, channel=GoalStateChannel.WireServer)

    def test_it_should_use_fast_track_goal_states_from_the_host_ga_plugin(self):
        with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol:
            protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.FastTrack)

            #
            # Verify __init__()
            #
            expected_etag = '22222'
            timestamp = datetime.datetime.now(UTC) + datetime.timedelta(seconds=15)
            protocol.mock_wire_data.set_etag(expected_etag, timestamp)

            goal_state = GoalState(protocol.client)

            self._assert_goal_state(goal_state, expected_etag,
channel=GoalStateChannel.HostGAPlugin) # # Verify update() # expected_etag = '333333' timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_etag(expected_etag, timestamp) goal_state.update() self._assert_goal_state(goal_state, expected_etag, channel=GoalStateChannel.HostGAPlugin) def test_it_should_use_the_most_recent_goal_state(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: goal_state = GoalState(protocol.client) # The most recent goal state is FastTrack timestamp = datetime.datetime.now(UTC) + datetime.timedelta(seconds=15) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.FastTrack) protocol.mock_wire_data.set_etag('222222', timestamp) goal_state.update() self._assert_goal_state(goal_state, '222222', channel=GoalStateChannel.HostGAPlugin, source=GoalStateSource.FastTrack) # The most recent goal state is Fabric timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_incarnation('222', timestamp) goal_state.update() self._assert_goal_state(goal_state, '222', channel=GoalStateChannel.WireServer, source=GoalStateSource.Fabric) # The most recent goal state is Fabric, but it is coming from the HostGAPlugin (should be ignored) timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric) protocol.mock_wire_data.set_etag('333333', timestamp) goal_state.update() self._assert_goal_state(goal_state, '222', channel=GoalStateChannel.WireServer, source=GoalStateSource.Fabric) def test_it_should_mark_outdated_goal_states(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: goal_state = GoalState(protocol.client) initial_incarnation = goal_state.incarnation initial_timestamp = goal_state.extensions_goal_state.created_on_timestamp # Make the most recent goal state FastTrack timestamp = datetime.datetime.now(UTC) + datetime.timedelta(seconds=15) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.FastTrack) 
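The selection rules these tests exercise — a Fabric goal state coming through the HostGAPlugin channel is ignored, and otherwise the most recent goal state wins — can be modeled as a tiny standalone sketch. Everything here (`CandidateGoalState`, `select_goal_state`, integer timestamps) is a hypothetical miniature of the agent's behavior as described by the tests, not the production code:

```python
from collections import namedtuple

# A candidate goal state: where it came from (Fabric vs FastTrack source),
# which channel delivered it, and when it was created.
CandidateGoalState = namedtuple("CandidateGoalState", ["source", "channel", "created_on"])


def select_goal_state(wire_server_gs, host_ga_plugin_gs):
    """Pick the goal state to act on, per the rules the tests above assert."""
    # Fabric goal states are only honored when they come from the WireServer;
    # the HostGAPlugin copy of a Fabric goal state is ignored.
    if host_ga_plugin_gs.source == "Fabric":
        return wire_server_gs
    # Otherwise, the most recently created goal state wins.
    if host_ga_plugin_gs.created_on > wire_server_gs.created_on:
        return host_ga_plugin_gs
    return wire_server_gs
```

With this model, a newer FastTrack goal state from the HostGAPlugin supersedes the WireServer one, while a newer Fabric goal state relayed by the HostGAPlugin does not — matching the channel/source assertions in the tests.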
protocol.mock_wire_data.set_etag('444444', timestamp) goal_state.update() # Update the goal state after the HGAP plugin stops supporting vmSettings def http_get_handler(url, *_, **__): if self.is_host_plugin_vm_settings_request(url): return MockHttpResponse(httpclient.NOT_FOUND) return None protocol.set_http_handlers(http_get_handler=http_get_handler) goal_state.update() self._assert_goal_state(goal_state, initial_incarnation, channel=GoalStateChannel.WireServer, source=GoalStateSource.Fabric) self.assertEqual(initial_timestamp, goal_state.extensions_goal_state.created_on_timestamp, "The timestamp of the updated goal state is incorrect") self.assertTrue(goal_state.extensions_goal_state.is_outdated, "The updated goal state should be marked as outdated") def test_it_should_download_certs_on_a_new_fast_track_goal_state(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(data_file) as protocol: goal_state = GoalState(protocol.client) cert = "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9" crt_path = os.path.join(self.tmp_dir, cert + ".crt") prv_path = os.path.join(self.tmp_dir, cert + ".prv") # Check that crt and prv files are downloaded after processing goal state self.assertTrue(os.path.isfile(crt_path)) self.assertTrue(os.path.isfile(prv_path)) # Remove .crt file os.remove(crt_path) if os.path.isfile(crt_path): raise Exception("{0}.crt was not removed.".format(cert)) # Update goal state and check that .crt was downloaded protocol.mock_wire_data.set_etag(888) goal_state.update() self.assertTrue(os.path.isfile(crt_path)) def test_it_should_download_certs_on_a_new_fabric_goal_state(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric) goal_state = GoalState(protocol.client) cert = "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9" crt_path = os.path.join(self.tmp_dir, cert + ".crt") prv_path = 
os.path.join(self.tmp_dir, cert + ".prv") # Check that crt and prv files are downloaded after processing goal state self.assertTrue(os.path.isfile(crt_path)) self.assertTrue(os.path.isfile(prv_path)) # Remove .crt file os.remove(crt_path) if os.path.isfile(crt_path): raise Exception("{0}.crt was not removed.".format(cert)) # Update goal state and check that .crt was downloaded protocol.mock_wire_data.set_incarnation(999) goal_state.update() self.assertTrue(os.path.isfile(crt_path)) def test_goal_state_should_contain_empty_certs_when_it_is_fails_to_decrypt_certs(self): # This test simulates that scenario by mocking the goal state request is fabric, and it contains incorrect certs(incorrect-certs.xml) data_file = "wire/incorrect-certs.xml" def http_get_handler(url, *_, **__): if HttpRequestPredicates.is_certificates_request(url): http_get_handler.certificate_requests += 1 data = load_data(data_file) return MockHttpResponse(status=200, body=data.encode('utf-8')) return None http_get_handler.certificate_requests = 0 with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.mock_wire_data.reset_call_counts() goal_state = GoalState(protocol.client) self.assertEqual(0, len(goal_state.certs.summary), "Certificates should be empty") self.assertEqual(2, http_get_handler.certificate_requests, "There should have been exactly 2 requests for the goal state certificates") # 1 for the initial request, 1 for the retry with an older cypher def test_goal_state_should_try_legacy_cypher_and_then_fail_when_no_cyphers_are_supported_by_the_wireserver(self): cyphers = [] def http_get_handler(url, *_, **kwargs): if HttpRequestPredicates.is_certificates_request(url): cypher = kwargs["headers"].get("x-ms-cipher-name") if cypher is None: raise Exception("x-ms-cipher-name header is missing from the Certificates request") cyphers.append(cypher) return MockHttpResponse(status=400, body="unsupported cypher: 
{0}".format(cypher).encode('utf-8')) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: with patch("azurelinuxagent.common.event.LogEvent.error") as log_error_patch: protocol.set_http_handlers(http_get_handler=http_get_handler) goal_state = GoalState(protocol.client) log_error_args, _ = log_error_patch.call_args self.assertEqual(cyphers, ["AES128_CBC", "DES_EDE3_CBC"], "There should have been 2 requests for the goal state certificates (AES128_CBC and DES_EDE3_CBC)") self.assertEqual(log_error_args[0], "GoalStateCertificates", "An error fetching the goal state Certificates should have been reported") self.assertEqual(0, len(goal_state.certs.summary), "Certificates should be empty") self.assertFalse(os.path.exists(os.path.join(conf.get_lib_dir(), "Certificates.pfx")), "The Certificates.pfx file should not have been created") def test_goal_state_should_try_legacy_cypher_and_then_fail_when_no_cyphers_are_supported_by_openssl(self): cyphers = [] def http_get_handler(url, *_, **kwargs): if HttpRequestPredicates.is_certificates_request(url): cyphers.append(kwargs["headers"].get("x-ms-cipher-name")) return None original_popen = subprocess.Popen openssl = conf.get_openssl_cmd() decrypt_calls = [] def mock_fail_popen(command, *args, **kwargs): if len(command) > 3 and command[0:3] == [openssl, "cms", "-decrypt"]: decrypt_calls.append(command) command[1] = "fake_openssl_command" # force an error on the openssl to simulate a decryption failure return original_popen(command, *args, **kwargs) with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.set_http_handlers(http_get_handler=http_get_handler) with patch("azurelinuxagent.common.event.LogEvent.error") as log_error_patch: with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", mock_fail_popen): goal_state = GoalState(protocol.client) log_error_args, _ = log_error_patch.call_args self.assertEqual(cyphers, ["AES128_CBC", "DES_EDE3_CBC"], "There should have been 2 requests for 
the goal state certificates (AES128_CBC and DES_EDE3_CBC)") self.assertEqual(2, len(decrypt_calls), "There should have been 2 calls to 'openssl cms -decrypt'") self.assertEqual(log_error_args[0], "GoalStateCertificates", "An error fetching the goal state Certificates should have been reported") self.assertEqual(0, len(goal_state.certs.summary), "Certificates should be empty") self.assertFalse(os.path.exists(os.path.join(conf.get_lib_dir(), "Certificates.pfx")), "The Certificates.pfx file should not have been created") def test_goal_state_should_try_without_and_with_mac_verification_then_fail_when_the_pfx_cannot_be_converted(self): original_popen = subprocess.Popen openssl = conf.get_openssl_cmd() nomacver = [] def mock_fail_popen(command, *args, **kwargs): if len(command) > 2 and command[0] == openssl and command[1] == "pkcs12": nomacver.append("-nomacver" in command) # force an error on the openssl to simulate the conversion failure command[1] = "fake_openssl_command" return original_popen(command, *args, **kwargs) with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: with patch("azurelinuxagent.common.event.LogEvent.error") as log_error_patch: with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", mock_fail_popen): goal_state = GoalState(protocol.client) log_error_args, _ = log_error_patch.call_args self.assertEqual(nomacver, [True, False], "There should have been 2 attempts to parse the PFX (with and without -nomacver)") self.assertEqual(log_error_args[0], "GoalStateCertificates", "An error fetching the goal state Certificates should have been reported") self.assertEqual(0, len(goal_state.certs.summary), "Certificates should be empty") def test_it_should_raise_when_goal_state_properties_not_initialized(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: goal_state = GoalState( protocol.client, goal_state_properties=~GoalStateProperties.All) goal_state.update() with self.assertRaises(ProtocolError) as context: _ = 
goal_state.container_id expected_message = "ContainerId is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.role_config_name expected_message = "RoleConfig is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.role_instance_id expected_message = "RoleInstanceId is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.extensions_goal_state expected_message = "ExtensionsGoalState is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.hosting_env expected_message = "HostingEnvironment is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.certs expected_message = "Certificates is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.shared_conf expected_message = "SharedConfig is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.remote_access expected_message = "RemoteAccessInfo is not in goal state properties" self.assertIn(expected_message, str(context.exception)) goal_state = GoalState( protocol.client, goal_state_properties=GoalStateProperties.All & ~GoalStateProperties.HostingEnv) goal_state.update() _ = goal_state.container_id, goal_state.role_instance_id, goal_state.role_config_name, \ goal_state.extensions_goal_state, goal_state.certs, goal_state.shared_conf, goal_state.remote_access with self.assertRaises(ProtocolError) as context: _ = goal_state.hosting_env expected_message = 
"HostingEnvironment is not in goal state properties" self.assertIn(expected_message, str(context.exception)) def test_it_should_pick_up_most_recent_goal_state_when_the_tenant_certificate_is_rotated(self): # # During rotation of the tenant certificate a new Fabric goal state is generated; however, neither the vmSettings nor the extensionsConfig change. In that case, the agent should pick up the most recent of # vmSettings and extensionsConfig. The test data below comes from an actual incident, in which the tenant certificate was rotated on incarnation 4. # goal_state_data = wire_protocol_data.DATA_FILE.copy() goal_state_data.update({ "goal_state": "tenant_certificate_rotation/GoalState-incarnation-3.xml", "certs": "tenant_certificate_rotation/Certificates-incarnation-3.xml", "ext_conf": "tenant_certificate_rotation/ExtensionsConfig-incarnation-3.xml", "vm_settings": "tenant_certificate_rotation/VmSettings-etag-10016425637754081485.json", "trans_cert": "tenant_certificate_rotation/TransportCert.pem", "trans_prv": "tenant_certificate_rotation/TransportPrivate.pem", "ETag": "10016425637754081485" }) with mock_wire_protocol(goal_state_data) as protocol: # Verify the test setup. Protocol detection should initialize the goal state to incarnation 3 goal_state = protocol.client.get_goal_state() if goal_state.incarnation != '3': raise Exception("Incarnation 3 should have been picked up during protocol detection. Got {0}".format(goal_state.incarnation)) if goal_state.extensions_goal_state.source != "FastTrack": raise Exception("The Fast Track goal state should have picked up on initialization, since it is the most recent goal state. Got {0}".format(goal_state.extensions_goal_state.source)) if all(c["thumbprint"] != "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9" for c in goal_state.certs.summary): raise Exception("The tenant certificate on incarnation 3, 'F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9', is missing from the goal state. 
Certificates: {0}".format(goal_state.certs.summary)) # Update the test data to incarnation 4, which has the newly rotated tenant certificate goal_state_data.update({ "goal_state": "tenant_certificate_rotation/GoalState-incarnation-4.xml", "certs": "tenant_certificate_rotation/Certificates-incarnation-4.xml", "ext_conf": "tenant_certificate_rotation/ExtensionsConfig-incarnation-4.xml", }) protocol.mock_wire_data.reload() # The incarnation in the test data changed, but not the ETag; even so, the goal state should pick up the Fast Track extensions, since that is the most recent goal state. This needs to be # verified for 3 scenarios: initializing a new goal state, force-updating the goal state, and updating the goal state. def assert_fast_track(test_case): self.assertEqual('4', goal_state.incarnation, "Incarnation 4 should have been picked up on {0}".format(test_case)) self.assertEqual("FastTrack", goal_state.extensions_goal_state.source, "The Fast Track goal state should have picked up on {0}, since it is the most recent goal state".format(test_case)) self.assertTrue( any(c["thumbprint"] == "C0EDFF1B408001B0FD14F8F615E567F7833822D0" for c in goal_state.certs.summary), "The tenant certificate on incarnation 4, 'C0EDFF1B408001B0FD14F8F615E567F7833822D0', is missing from the goal state. 
Certificates: {0}".format(goal_state.certs.summary)) goal_state = GoalState(protocol.client) assert_fast_track("initialization") goal_state.update(force_update=True) assert_fast_track("force-update") goal_state.update() assert_fast_track("update") def test_it_should_send_telemetry_for_extension_signed_or_unsigned(self): # Should send telemetry for signed extension for extensionsConfig goal state with patch("azurelinuxagent.common.protocol.goal_state.add_event") as add_event: with mock_wire_protocol(wire_protocol_data.DATA_FILE): telemetry = [kw for _, kw in add_event.call_args_list if kw['op'] == WALAEventOperation.ExtensionSigned and kw['is_success']] self.assertEqual(1, len(telemetry), "Should send telemetry for signed extension in extensionsConfig goal state") # Should send telemetry for unsigned extension in extensionsConfig goal state ext_conf_data_file = wire_protocol_data.DATA_FILE.copy() ext_conf_data_file["ext_conf"] = "wire/ext_conf-no_encoded_signature.xml" with patch("azurelinuxagent.common.protocol.goal_state.add_event") as add_event: with mock_wire_protocol(ext_conf_data_file): telemetry = [kw for _, kw in add_event.call_args_list if kw['op'] == WALAEventOperation.ExtensionSigned and not kw['is_success']] self.assertEqual(1, len(telemetry), "Should send telemetry for unsigned extension in extensionsConfig goal state") # Should send telemetry for both signed and unsigned extensions in fast track goal state vm_settings_data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() # This vm settings extensions goal state has 1 extension with encodedSignature (AzureMonitorLinuxAgent), and # 1 extension without encodedSignature (AzureSecurityLinuxAgent). The HGAP version supports signature. 
vm_settings_data_file["vm_settings"] = "hostgaplugin/vm_settings-supported_hgap_version_for_signature.json" with patch("azurelinuxagent.common.protocol.goal_state.add_event") as add_event: with mock_wire_protocol(vm_settings_data_file): signed_telemetry = [kw for _, kw in add_event.call_args_list if kw['op'] == WALAEventOperation.ExtensionSigned and kw['is_success']] self.assertEqual(1, len(signed_telemetry), "Should send telemetry for signed extension in fast track goal state") unsigned_telemetry = [kw for _, kw in add_event.call_args_list if kw['op'] == WALAEventOperation.ExtensionSigned and not kw['is_success']] self.assertEqual(1, len(unsigned_telemetry), "Should send telemetry for unsigned extensions in fast track goal state") def test_it_should_not_send_telemetry_for_extension_signature_for_uninstall(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf-no_encoded_signature.xml" with mock_wire_protocol(data_file) as protocol: with patch("azurelinuxagent.common.protocol.goal_state.add_event") as add_event: # Generate a new mock goal state to uninstall the extension - increment the incarnation protocol.mock_wire_data.set_incarnation(2) protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) goal_state = GoalState(protocol.client) goal_state.update() telemetry = [kw for _, kw in add_event.call_args_list if kw['op'] == WALAEventOperation.ExtensionSigned and not kw['is_success']] self.assertEqual(0, len(telemetry), "Should not send telemetry for unsigned extension when requested operation is uninstall") def test_it_should_not_send_telemetry_for_unsupported_hgap_version(self): # This vm settings extensions goal state has a version of HGAP that does not support the 'encodedSignature' # property, and it includes an extension with no signature. Telemetry should not be sent, in this case. 
        data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        data_file["vm_settings"] = "hostgaplugin/vm_settings-unsupported_hgap_version_for_signature.json"

        with patch("azurelinuxagent.common.protocol.goal_state.add_event") as add_event:
            with mock_wire_protocol(data_file):
                unsigned_telemetry = [kw for _, kw in add_event.call_args_list if kw['op'] == WALAEventOperation.ExtensionSigned and not kw['is_success']]
                self.assertEqual(0, len(unsigned_telemetry), "Should not send telemetry for unsigned extensions in fast track goal state if HGAP version does not support signature")


Azure-WALinuxAgent-a976115/tests/common/protocol/test_healthservice.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import json

from azurelinuxagent.common.exception import HttpError
from azurelinuxagent.common.protocol.healthservice import Observation, HealthService
from azurelinuxagent.common.utils import restutil
from tests.common.protocol.test_hostplugin import MockResponse
from tests.lib.tools import AgentTestCase, patch


class TestHealthService(AgentTestCase):

    def assert_status_code(self, status_code, expected_healthy):
        response = MockResponse('response', status_code)
        is_healthy = not restutil.request_failed_at_hostplugin(response)
        self.assertEqual(expected_healthy, is_healthy)

    def assert_observation(self, call_args, name, is_healthy, value, description):
        endpoint = call_args[0][0]
        content = call_args[0][1]

        jo = json.loads(content)
        api = jo['Api']
        source = jo['Source']
        version = jo['Version']
        obs = jo['Observations']
        fo = obs[0]
        obs_name = fo['ObservationName']
        obs_healthy = fo['IsHealthy']
        obs_value = fo['Value']
        obs_description = fo['Description']

        self.assertEqual('application/json', call_args[1]['headers']['Content-Type'])
        self.assertEqual('http://endpoint:80/HealthService', endpoint)
        self.assertEqual('reporttargethealth', api)
        self.assertEqual('WALinuxAgent', source)
        self.assertEqual('1.0', version)
        self.assertEqual(name, obs_name)
        self.assertEqual(value, obs_value)
        self.assertEqual(is_healthy, obs_healthy)
        self.assertEqual(description, obs_description)

    def assert_telemetry(self, call_args, response=''):
        args, kw_args = call_args  # pylint: disable=unused-variable
        self.assertFalse(kw_args['is_success'])
        self.assertEqual('HealthObservation', kw_args['op'])
        obs = json.loads(kw_args['message'])
        self.assertEqual(obs['Value'], response)

    def test_observation_validity(self):
        try:
            Observation(name=None, is_healthy=True)
            self.fail('Empty observation name should raise ValueError')
        except ValueError:
            pass

        try:
            Observation(name='Name', is_healthy=None)
            self.fail('Empty measurement should raise ValueError')
        except ValueError:
            pass

        o =
Observation(name='Name', is_healthy=True, value=None, description=None) self.assertEqual('', o.value) self.assertEqual('', o.description) long_str = 's' * 200 o = Observation(name=long_str, is_healthy=True, value=long_str, description=long_str) self.assertEqual(200, len(o.name)) self.assertEqual(200, len(o.value)) self.assertEqual(200, len(o.description)) self.assertEqual(64, len(o.as_obj['ObservationName'])) self.assertEqual(128, len(o.as_obj['Value'])) self.assertEqual(128, len(o.as_obj['Description'])) def test_observation_json(self): health_service = HealthService('endpoint') health_service.observations.append(Observation(name='name', is_healthy=True, value='value', description='description')) expected_json = '{"Source": "WALinuxAgent", ' \ '"Api": "reporttargethealth", ' \ '"Version": "1.0", ' \ '"Observations": [{' \ '"Value": "value", ' \ '"ObservationName": "name", ' \ '"Description": "description", ' \ '"IsHealthy": true' \ '}]}' expected = sorted(json.loads(expected_json).items()) actual = sorted(json.loads(health_service.as_json).items()) self.assertEqual(expected, actual) @patch('azurelinuxagent.common.event.add_event') @patch("azurelinuxagent.common.utils.restutil.http_post") def test_reporting(self, patch_post, patch_add_event): health_service = HealthService('endpoint') health_service.report_host_plugin_status(is_healthy=True, response='response') self.assertEqual(1, patch_post.call_count) self.assertEqual(0, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_STATUS_OBSERVATION_NAME, is_healthy=True, value='response', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_status(is_healthy=False, response='error') self.assertEqual(2, patch_post.call_count) self.assertEqual(1, patch_add_event.call_count) self.assert_telemetry(call_args=patch_add_event.call_args, response='error') self.assert_observation(call_args=patch_post.call_args, 
name=HealthService.HOST_PLUGIN_STATUS_OBSERVATION_NAME, is_healthy=False, value='error', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_extension_artifact(is_healthy=True, source='source', response='response') self.assertEqual(3, patch_post.call_count) self.assertEqual(1, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME, is_healthy=True, value='response', description='source') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_extension_artifact(is_healthy=False, source='source', response='response') self.assertEqual(4, patch_post.call_count) self.assertEqual(2, patch_add_event.call_count) self.assert_telemetry(call_args=patch_add_event.call_args, response='response') self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME, is_healthy=False, value='response', description='source') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_heartbeat(is_healthy=True) self.assertEqual(5, patch_post.call_count) self.assertEqual(2, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME, is_healthy=True, value='', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_heartbeat(is_healthy=False) self.assertEqual(3, patch_add_event.call_count) self.assert_telemetry(call_args=patch_add_event.call_args) self.assertEqual(6, patch_post.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME, is_healthy=False, value='', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_versions(is_healthy=True, response='response') self.assertEqual(7, 
                         patch_post.call_count)
        self.assertEqual(3, patch_add_event.call_count)
        self.assert_observation(call_args=patch_post.call_args,
                                name=HealthService.HOST_PLUGIN_VERSIONS_OBSERVATION_NAME,
                                is_healthy=True,
                                value='response',
                                description='')
        self.assertEqual(0, len(health_service.observations))

        health_service.report_host_plugin_versions(is_healthy=False, response='response')

        self.assertEqual(8, patch_post.call_count)
        self.assertEqual(4, patch_add_event.call_count)
        self.assert_telemetry(call_args=patch_add_event.call_args, response='response')
        self.assert_observation(call_args=patch_post.call_args,
                                name=HealthService.HOST_PLUGIN_VERSIONS_OBSERVATION_NAME,
                                is_healthy=False,
                                value='response',
                                description='')
        self.assertEqual(0, len(health_service.observations))

        patch_post.side_effect = HttpError()
        health_service.report_host_plugin_versions(is_healthy=True, response='')

        self.assertEqual(9, patch_post.call_count)
        self.assertEqual(4, patch_add_event.call_count)
        self.assertEqual(0, len(health_service.observations))

    def test_observation_length(self):
        health_service = HealthService('endpoint')

        # make 100 observations
        for i in range(0, 100):
            health_service._observe(is_healthy=True, name='{0}'.format(i))

        # ensure we keep only 10
        self.assertEqual(10, len(health_service.observations))

        # ensure we keep the most recent 10
        self.assertEqual('90', health_service.observations[0].name)
        self.assertEqual('99', health_service.observations[9].name)

    def test_status_codes(self):
        # healthy
        self.assert_status_code(status_code=200, expected_healthy=True)
        self.assert_status_code(status_code=201, expected_healthy=True)
        self.assert_status_code(status_code=302, expected_healthy=True)
        self.assert_status_code(status_code=400, expected_healthy=True)
        self.assert_status_code(status_code=416, expected_healthy=True)
        self.assert_status_code(status_code=419, expected_healthy=True)
        self.assert_status_code(status_code=429, expected_healthy=True)
        self.assert_status_code(status_code=502, expected_healthy=True)
        # unhealthy
        self.assert_status_code(status_code=500, expected_healthy=False)
        self.assert_status_code(status_code=501, expected_healthy=False)
        self.assert_status_code(status_code=503, expected_healthy=False)
        self.assert_status_code(status_code=504, expected_healthy=False)


Azure-WALinuxAgent-a976115/tests/common/protocol/test_hostplugin.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import base64
import contextlib
import datetime
import json
import os.path
import sys
import unittest

from azurelinuxagent.common.protocol import hostplugin, restapi, wire
from azurelinuxagent.common import conf
from azurelinuxagent.common.errorstate import ErrorState
from azurelinuxagent.common.exception import HttpError, ResourceGoneError, ProtocolError
from azurelinuxagent.common.future import ustr, UTC, httpclient
from azurelinuxagent.common.osutil.default import UUID_PATTERN
from azurelinuxagent.common.protocol.hostplugin import API_VERSION, _VmSettingsErrorReporter, VmSettingsNotSupported, VmSettingsSupportStopped
from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource
from azurelinuxagent.common.protocol.goal_state import GoalState
from azurelinuxagent.common.utils import restutil
from azurelinuxagent.common.version import AGENT_VERSION, AGENT_NAME
from tests.lib.mock_wire_protocol import mock_wire_protocol, wire_protocol_data, MockHttpResponse
from tests.lib.http_request_predicates import HttpRequestPredicates
from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_NO_EXT
from tests.lib.tools import AgentTestCase, PY_VERSION_MAJOR, Mock, patch

hostplugin_status_url = "http://168.63.129.16:32526/status"
hostplugin_versions_url = "http://168.63.129.16:32526/versions"
health_service_url = 'http://168.63.129.16:80/HealthService'
hostplugin_logs_url = "http://168.63.129.16:32526/vmAgentLog"
sas_url = "http://sas_url"
wireserver_url = "168.63.129.16"

block_blob_type = 'BlockBlob'
page_blob_type = 'PageBlob'

api_versions = '["2015-09-01"]'
storage_version = "2014-02-14"

faux_status = "{ 'dummy' : 'data' }"
faux_status_b64 = base64.b64encode(bytes(bytearray(faux_status, encoding='utf-8')))
if PY_VERSION_MAJOR > 2:
    faux_status_b64 = faux_status_b64.decode('utf-8')


class TestHostPlugin(HttpRequestPredicates, AgentTestCase):

    def _init_host(self):
        with mock_wire_protocol(DATA_FILE) as protocol:
            host_plugin = wire.HostPluginProtocol(wireserver_url)
            GoalState.update_host_plugin_headers(protocol.client)
            self.assertTrue(host_plugin.health_service is not None)
            return host_plugin

    def _init_status_blob(self):
        wire_protocol_client = wire.WireProtocol(wireserver_url).client
        status_blob = wire_protocol_client.status_blob
        status_blob.data = faux_status
        status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready")
        return status_blob

    def _relax_timestamp(self, headers):
        new_headers = []

        for header in headers:
            header_value = header['headerValue']
            if header['headerName'] == 'x-ms-date':
                timestamp = header['headerValue']
                header_value = timestamp[:timestamp.rfind(":")]

            new_header = {header['headerName']: header_value}
            new_headers.append(new_header)

        return new_headers

    def _compare_data(self, actual, expected):
        # Remove seconds from the timestamps for testing purposes, that level of granularity introduces test flakiness
        actual['headers'] = self._relax_timestamp(actual['headers'])
        expected['headers'] = self._relax_timestamp(expected['headers'])

        for k in iter(expected.keys()):
            if k == 'content' or k == 'requestUri':
                if actual[k] != expected[k]:
                    print("Mismatch: Actual '{0}'='{1}', "
                          "Expected '{0}'='{2}'".format(k, actual[k], expected[k]))
                    return False
            elif k == 'headers':
                for h in expected['headers']:
                    if not (h in actual['headers']):
                        print("Missing Header: '{0}'".format(h))
                        return False
            else:
                print("Unexpected Key: '{0}'".format(k))
                return False
        return True

    def _hostplugin_data(self, blob_headers, content=None):
        headers = []
        for name in iter(blob_headers.keys()):
            headers.append({
                'headerName': name,
                'headerValue': blob_headers[name]
            })

        data = {
            'requestUri': sas_url,
            'headers': headers
        }
        if not content is None:
            s = base64.b64encode(bytes(content))
            if PY_VERSION_MAJOR > 2:
                s = s.decode('utf-8')
            data['content'] = s
        return data

    def _hostplugin_headers(self, goal_state):
        return {
            'x-ms-version': '2015-09-01',
            'Content-type': 'application/json',
            'x-ms-containerid': goal_state.container_id,
            'x-ms-host-config-name': goal_state.role_config_name
        }

    def _validate_hostplugin_args(self, args, goal_state, exp_method, exp_url, exp_data):
        args, kwargs = args
        self.assertEqual(exp_method, args[0])
        self.assertEqual(exp_url, args[1])
        self.assertTrue(self._compare_data(json.loads(args[2]), exp_data))

        headers = kwargs['headers']
        self.assertEqual(headers['x-ms-containerid'], goal_state.container_id)
        self.assertEqual(headers['x-ms-host-config-name'], goal_state.role_config_name)

    @staticmethod
    @contextlib.contextmanager
    def create_mock_protocol():
        data_file = DATA_FILE_NO_EXT.copy()
        data_file["ext_conf"] = "wire/ext_conf_no_extensions-page_blob.xml"
        with mock_wire_protocol(data_file) as protocol:
            status = restapi.VMStatus(status="Ready", message="Guest Agent is running")
            protocol.client.status_blob.set_vm_status(status)

            # Tests that use this mock also mock WireClient.update_goal_state() to verify how it is called
            protocol.client.update_goal_state = Mock()

            yield protocol
    @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_versions")
    @patch("azurelinuxagent.common.protocol.hostplugin.restutil.http_get")
    @patch("azurelinuxagent.common.protocol.hostplugin.add_event")
    def assert_ensure_initialized(self, patch_event, patch_http_get, patch_report_health,
                                  response_body,
                                  response_status_code,
                                  should_initialize,
                                  should_report_healthy):
        host = hostplugin.HostPluginProtocol(endpoint='ws')
        host.is_initialized = False
        patch_http_get.return_value = MockResponse(body=response_body,
                                                   reason='reason',
                                                   status_code=response_status_code)

        return_value = host.ensure_initialized()

        self.assertEqual(return_value, host.is_available)
        self.assertEqual(should_initialize, host.is_initialized)

        init_events = [kwargs for _, kwargs in patch_event.call_args_list if kwargs['op'] == 'InitializeHostPlugin']
        self.assertEqual(1, len(init_events), 'Expected exactly 1 InitializeHostPlugin event')
        self.assertEqual(should_initialize, init_events[0]['is_success'])

        self.assertEqual(1, patch_report_health.call_count)
        self.assertEqual(should_report_healthy, patch_report_health.call_args[1]['is_healthy'])

        actual_response = patch_report_health.call_args[1]['response']
        if should_initialize:
            self.assertEqual('', actual_response)
        else:
            self.assertTrue('HTTP Failed' in actual_response)
            self.assertTrue(response_body in actual_response)
            self.assertTrue(ustr(response_status_code) in actual_response)

    def test_ensure_initialized(self):
        """
        Test calls to ensure_initialized
        """
        self.assert_ensure_initialized(response_body=api_versions,  # pylint: disable=no-value-for-parameter
                                       response_status_code=200,
                                       should_initialize=True,
                                       should_report_healthy=True)

        self.assert_ensure_initialized(response_body='invalid ip',  # pylint: disable=no-value-for-parameter
                                       response_status_code=400,
                                       should_initialize=False,
                                       should_report_healthy=True)

        self.assert_ensure_initialized(response_body='generic bad request',  # pylint: disable=no-value-for-parameter
                                       response_status_code=400,
                                       should_initialize=False,
                                       should_report_healthy=True)

        self.assert_ensure_initialized(response_body='resource gone',  # pylint: disable=no-value-for-parameter
                                       response_status_code=410,
                                       should_initialize=False,
                                       should_report_healthy=True)

        self.assert_ensure_initialized(response_body='generic error',  # pylint: disable=no-value-for-parameter
                                       response_status_code=500,
                                       should_initialize=False,
                                       should_report_healthy=False)

        self.assert_ensure_initialized(response_body='upstream error',  # pylint: disable=no-value-for-parameter
                                       response_status_code=502,
                                       should_initialize=False,
                                       should_report_healthy=True)

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True)
    @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=False)
    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status")
    def test_default_channel(self, patch_put, patch_upload, _):
        """
        Status now defaults to HostPlugin. Validate that any errors on the public channel are ignored.
        Validate that the default channel is never changed as part of status upload.
        """
        with self.create_mock_protocol() as wire_protocol:
            wire.HostPluginProtocol.is_default_channel = False

            wire_protocol.client.update_goal_state()

            # act
            wire_protocol.client.upload_status_blob()

            # assert direct route is not called
            self.assertEqual(0, patch_upload.call_count, "Direct channel was used")

            # assert host plugin route is called
            self.assertEqual(1, patch_put.call_count, "Host plugin was not used")

            # assert update goal state is only called once
            self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Unexpected call count")

            # ensure the correct url is used
            self.assertEqual(sas_url, patch_put.call_args[0][0])

            # ensure host plugin is not set as default
            self.assertFalse(wire.HostPluginProtocol.is_default_channel)

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True)
    @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=True)
    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status",
           side_effect=HttpError("503"))
    def test_fallback_channel_503(self, patch_put, patch_upload, _):
        """
        When host plugin returns a 503, we should fall back to the direct channel
        """
        with self.create_mock_protocol() as wire_protocol:
            wire.HostPluginProtocol.is_default_channel = False

            wire_protocol.client.update_goal_state()

            # act
            wire_protocol.client.upload_status_blob()

            # assert direct route is called
            self.assertEqual(1, patch_upload.call_count, "Direct channel was not used")

            # assert host plugin route is called
            self.assertEqual(1, patch_put.call_count, "Host plugin was not used")

            # assert update goal state is only called once
            self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Update goal state unexpected call count")

            # ensure the correct url is used
            self.assertEqual(sas_url, patch_put.call_args[0][0])

            # ensure host plugin is not set as default
            self.assertFalse(wire.HostPluginProtocol.is_default_channel)
    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True)
    @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=True)
    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status",
           side_effect=ResourceGoneError("410"))
    @patch("azurelinuxagent.common.protocol.wire.WireClient.update_host_plugin_from_goal_state")
    def test_fallback_channel_410(self, patch_refresh_host_plugin, patch_put, patch_upload, _):
        """
        When host plugin returns a 410, we should force the goal state update and return
        """
        with self.create_mock_protocol() as wire_protocol:
            wire.HostPluginProtocol.is_default_channel = False

            wire_protocol.client.update_goal_state()

            # act
            wire_protocol.client.upload_status_blob()

            # assert direct route is not called
            self.assertEqual(0, patch_upload.call_count, "Direct channel was used")

            # assert host plugin route is called
            self.assertEqual(1, patch_put.call_count, "Host plugin was not used")

            # assert update goal state is called, then update_host_plugin_from_goal_state is called
            self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Update goal state unexpected call count")
            self.assertEqual(1, patch_refresh_host_plugin.call_count, "Refresh host plugin unexpected call count")

            # ensure the correct url is used
            self.assertEqual(sas_url, patch_put.call_args[0][0])

            # ensure host plugin is not set as default
            self.assertFalse(wire.HostPluginProtocol.is_default_channel)

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True)
    @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=False)
    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status",
           side_effect=HttpError("500"))
    def test_fallback_channel_failure(self, patch_put, patch_upload, _):
        """
        When host plugin returns a 500, and direct fails, we should raise a ProtocolError
        """
        with self.create_mock_protocol() as wire_protocol:
            wire.HostPluginProtocol.is_default_channel = False

            wire_protocol.client.update_goal_state()

            # act
            self.assertRaises(wire.ProtocolError, wire_protocol.client.upload_status_blob)

            # assert direct route is called (as the fallback)
            self.assertEqual(1, patch_upload.call_count, "Direct channel was not used")

            # assert host plugin route is called
            self.assertEqual(1, patch_put.call_count, "Host plugin was not used")

            # assert update goal state is only called once
            self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Update goal state unexpected call count")

            # ensure the correct url is used
            self.assertEqual(sas_url, patch_put.call_args[0][0])

            # ensure host plugin is not set as default
            self.assertFalse(wire.HostPluginProtocol.is_default_channel)

    def test_put_status_error_reporting(self):
        """
        Validate the telemetry when uploading status fails
        """
        wire.HostPluginProtocol.is_default_channel = False

        with patch.object(wire.StatusBlob, "upload", return_value=False):
            with self.create_mock_protocol() as wire_protocol:
                wire_protocol_client = wire_protocol.client

                put_error = wire.HttpError("put status http error")
                with patch.object(restutil, "http_put", side_effect=put_error):
                    with patch.object(wire.HostPluginProtocol, "ensure_initialized", return_value=True):
                        with patch("azurelinuxagent.common.event.add_event") as patch_add_event:
                            self.assertRaises(wire.ProtocolError, wire_protocol_client.upload_status_blob)

                            # The agent tries to upload via HostPlugin and that fails due to
                            # http_put having a side effect of "put_error"
                            #
                            # The agent then tries to upload using a direct connection; that fails as well
                            # (the patched upload returns False), hence the ProtocolError.
                            self.assertEqual(1, wire_protocol_client.status_blob.upload.call_count)  # pylint: disable=no-member

                            # The agent never touches the default protocol in this code path, so no change.
                            self.assertFalse(wire.HostPluginProtocol.is_default_channel)

                            # A single telemetry event is logged for the fallback to the direct channel
                            self.assertEqual(1, patch_add_event.call_count)
                            self.assertEqual('ReportStatus', patch_add_event.call_args[1]['op'])
                            self.assertTrue('Falling back to direct' in patch_add_event.call_args[1]['message'])
                            self.assertEqual(True, patch_add_event.call_args[1]['is_success'])

    def test_validate_http_request_when_uploading_status(self):
        """Validate correct set of data is sent to HostGAPlugin when reporting VM status"""
        with mock_wire_protocol(DATA_FILE) as protocol:
            test_goal_state = protocol.client._goal_state
            plugin = protocol.client.get_host_plugin()

            status_blob = protocol.client.status_blob
            status_blob.data = faux_status
            status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready")

            exp_method = 'PUT'
            exp_url = hostplugin_status_url
            exp_data = self._hostplugin_data(
                status_blob.get_block_blob_headers(len(faux_status)),
                bytearray(faux_status, encoding='utf-8'))

            with patch.object(restutil, "http_request") as patch_http:
                patch_http.return_value = Mock(status=httpclient.OK)

                with patch.object(plugin, 'get_api_versions') as patch_api:
                    patch_api.return_value = API_VERSION
                    plugin.put_vm_status(status_blob, sas_url, block_blob_type)

                    self.assertTrue(patch_http.call_count == 2)

                    # first call is to host plugin
                    self._validate_hostplugin_args(
                        patch_http.call_args_list[0],
                        test_goal_state,
                        exp_method, exp_url, exp_data)

                    # second call is to health service
                    self.assertEqual('POST', patch_http.call_args_list[1][0][0])
                    self.assertEqual(health_service_url, patch_http.call_args_list[1][0][1])

    def test_validate_block_blob(self):
        with mock_wire_protocol(DATA_FILE) as protocol:
            host_client = protocol.client.get_host_plugin()

            self.assertFalse(host_client.is_initialized)
            self.assertTrue(host_client.api_versions is None)
            self.assertTrue(host_client.health_service is not None)

            status_blob = protocol.client.status_blob
            status_blob.data = faux_status
            status_blob.type = block_blob_type
            status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready")

            exp_method = 'PUT'
            exp_url = hostplugin_status_url
            exp_data = self._hostplugin_data(
                status_blob.get_block_blob_headers(len(faux_status)),
                bytearray(faux_status, encoding='utf-8'))

            with patch.object(restutil, "http_request") as patch_http:
                patch_http.return_value = Mock(status=httpclient.OK)

                with patch.object(wire.HostPluginProtocol, "get_api_versions") as patch_get:
                    patch_get.return_value = api_versions
                    host_client.put_vm_status(status_blob, sas_url)

                    self.assertTrue(patch_http.call_count == 2)

                    # first call is to host plugin
                    self._validate_hostplugin_args(
                        patch_http.call_args_list[0],
                        protocol.get_goal_state(),
                        exp_method, exp_url, exp_data)

                    # second call is to health service
                    self.assertEqual('POST', patch_http.call_args_list[1][0][0])
                    self.assertEqual(health_service_url, patch_http.call_args_list[1][0][1])

    def test_validate_page_blobs(self):
        """Validate correct set of data is sent for page blobs"""
        with mock_wire_protocol(DATA_FILE) as protocol:
            test_goal_state = protocol.get_goal_state()

            host_client = protocol.client.get_host_plugin()

            self.assertFalse(host_client.is_initialized)
            self.assertTrue(host_client.api_versions is None)

            status_blob = protocol.client.status_blob
            status_blob.data = faux_status
            status_blob.type = page_blob_type
            status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready")

            exp_method = 'PUT'
            exp_url = hostplugin_status_url

            page_status = bytearray(status_blob.data, encoding='utf-8')
            page_size = int((len(page_status) + 511) / 512) * 512
            page_status = bytearray(status_blob.data.ljust(page_size), encoding='utf-8')
            page = bytearray(page_size)
            page[0: page_size] = page_status[0: len(page_status)]

            mock_response = MockResponse('', httpclient.OK)
            with patch.object(restutil, "http_request", return_value=mock_response) as patch_http:
                with patch.object(wire.HostPluginProtocol, "get_api_versions") as patch_get:
                    patch_get.return_value = api_versions
                    host_client.put_vm_status(status_blob, sas_url)

                    self.assertTrue(patch_http.call_count == 3)

                    # first call is to host plugin
                    exp_data = self._hostplugin_data(
                        status_blob.get_page_blob_create_headers(
                            page_size))
                    self._validate_hostplugin_args(
                        patch_http.call_args_list[0],
                        test_goal_state,
                        exp_method, exp_url, exp_data)

                    # second call is to health service
                    self.assertEqual('POST', patch_http.call_args_list[1][0][0])
                    self.assertEqual(health_service_url, patch_http.call_args_list[1][0][1])

                    # last call is to host plugin
                    exp_data = self._hostplugin_data(
                        status_blob.get_page_blob_page_headers(
                            0, page_size),
                        page)
                    exp_data['requestUri'] += "?comp=page"
                    self._validate_hostplugin_args(
                        patch_http.call_args_list[2],
                        test_goal_state,
                        exp_method, exp_url, exp_data)

    def test_validate_http_request_for_put_vm_log(self):
        def http_put_handler(url, *args, **kwargs):  # pylint: disable=inconsistent-return-statements
            if self.is_host_plugin_put_logs_request(url):
                http_put_handler.args, http_put_handler.kwargs = args, kwargs
                return MockResponse(body=b'', status_code=200)

        http_put_handler.args, http_put_handler.kwargs = [], {}

        with mock_wire_protocol(DATA_FILE, http_put_handler=http_put_handler) as protocol:
            test_goal_state = protocol.get_goal_state()

            expected_url = hostplugin.URI_FORMAT_PUT_LOG.format(wireserver_url, hostplugin.HOST_PLUGIN_PORT)
            expected_headers = {'x-ms-version': '2015-09-01',
                                "x-ms-containerid": test_goal_state.container_id,
                                "x-ms-vmagentlog-deploymentid": test_goal_state.role_config_name.split(".")[0],
                                "x-ms-client-name": AGENT_NAME,
                                "x-ms-client-version": AGENT_VERSION}

            host_client = protocol.client.get_host_plugin()

            self.assertFalse(host_client.is_initialized, "Host plugin should not be initialized!")

            content = b"test"
            host_client.put_vm_log(content)
            self.assertTrue(host_client.is_initialized, "Host plugin is not initialized!")

            urls = protocol.get_tracked_urls()

            self.assertEqual(expected_url, urls[0], "Unexpected request URL!")
            self.assertEqual(content, http_put_handler.args[0], "Unexpected content for HTTP PUT request!")

            headers = http_put_handler.kwargs['headers']
            for k in expected_headers:
                self.assertTrue(k in headers, "Header {0} not found in headers!".format(k))
                self.assertEqual(expected_headers[k], headers[k], "Request headers don't match!")

            # Special check for correlation id header value, check for pattern, not exact value
            self.assertTrue("x-ms-client-correlationid" in headers.keys(), "Correlation id not found in headers!")
            self.assertTrue(UUID_PATTERN.match(headers["x-ms-client-correlationid"]), "Correlation id is not in GUID form!")

    def test_put_vm_log_should_raise_an_exception_when_request_fails(self):
        def http_put_handler(url, *args, **kwargs):  # pylint: disable=inconsistent-return-statements
            if self.is_host_plugin_put_logs_request(url):
                http_put_handler.args, http_put_handler.kwargs = args, kwargs
                return MockResponse(body=ustr('Gone'), status_code=410)

        http_put_handler.args, http_put_handler.kwargs = [], {}

        with mock_wire_protocol(DATA_FILE, http_put_handler=http_put_handler) as protocol:
            host_client = wire.HostPluginProtocol(wireserver_url)
            GoalState.update_host_plugin_headers(protocol.client)

            self.assertFalse(host_client.is_initialized, "Host plugin should not be initialized!")

            with self.assertRaises(HttpError) as context_manager:
                content = b"test"
                host_client.put_vm_log(content)

            self.assertIsInstance(context_manager.exception, HttpError)
            self.assertIn("410", ustr(context_manager.exception))
            self.assertIn("Gone", ustr(context_manager.exception))

    def test_validate_get_extension_artifacts(self):
        with mock_wire_protocol(DATA_FILE) as protocol:
            test_goal_state = protocol.get_goal_state()

            expected_url = hostplugin.URI_FORMAT_GET_EXTENSION_ARTIFACT.format(wireserver_url, hostplugin.HOST_PLUGIN_PORT)
            expected_headers = {'x-ms-version': '2015-09-01',
                                "x-ms-containerid": test_goal_state.container_id,
                                "x-ms-host-config-name": test_goal_state.role_config_name,
                                "x-ms-artifact-location": sas_url}

            host_client = protocol.client.get_host_plugin()

            self.assertFalse(host_client.is_initialized)
            self.assertTrue(host_client.api_versions is None)
            self.assertTrue(host_client.health_service is not None)

            with patch.object(wire.HostPluginProtocol, "get_api_versions", return_value=api_versions) as patch_get:  # pylint: disable=unused-variable
                actual_url, actual_headers = host_client.get_artifact_request(sas_url, use_verify_header=False)
                self.assertTrue(host_client.is_initialized)
                self.assertFalse(host_client.api_versions is None)

                self.assertEqual(expected_url, actual_url)
                for k in expected_headers:
                    self.assertTrue(k in actual_headers)
                    self.assertEqual(expected_headers[k], actual_headers[k])

    def test_health(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get:
            patch_http_get.return_value = MockResponse('', 200)
            result = host_plugin.get_health()
            self.assertEqual(1, patch_http_get.call_count)
            self.assertTrue(result)

            patch_http_get.return_value = MockResponse('', 500)
            result = host_plugin.get_health()
            self.assertFalse(result)

            patch_http_get.side_effect = IOError('client IO error')
            try:
                host_plugin.get_health()
                self.fail('IO error expected to be raised')
            except IOError:
                # expected
                pass

    def test_ensure_health_service_called(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get", return_value=MockHttpResponse(200)) as patch_http_get:
            with patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_versions") as patch_report_versions:
                host_plugin.get_api_versions()
                self.assertEqual(1, patch_http_get.call_count)
                self.assertEqual(1, patch_report_versions.call_count)

    def test_put_status_healthy_signal(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get:
            with patch("azurelinuxagent.common.utils.restutil.http_post") as patch_http_post:
                with patch("azurelinuxagent.common.utils.restutil.http_put") as patch_http_put:
                    status_blob = self._init_status_blob()
                    # get_api_versions
                    patch_http_get.return_value = MockResponse(api_versions, 200)
                    # put status blob
                    patch_http_put.return_value = MockResponse(None, 201)

                    host_plugin.put_vm_status(status_blob=status_blob, sas_url=sas_url)

                    get_versions = [args for args in patch_http_get.call_args_list if args[0][0] == hostplugin_versions_url]
                    self.assertEqual(1, len(get_versions), "Expected exactly 1 GET on {0}".format(hostplugin_versions_url))

                    self.assertEqual(2, patch_http_put.call_count)
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args_list[0][0][0])
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args_list[1][0][0])

                    self.assertEqual(2, patch_http_post.call_count)

                    # signal for /versions
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[0][0][0])
                    jstr = patch_http_post.call_args_list[0][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginVersions', obj['Observations'][0]['ObservationName'])

                    # signal for /status
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[1][0][0])
                    jstr = patch_http_post.call_args_list[1][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginStatus', obj['Observations'][0]['ObservationName'])

    def test_put_status_unhealthy_signal_transient(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get:
            with patch("azurelinuxagent.common.utils.restutil.http_post") as patch_http_post:
                with patch("azurelinuxagent.common.utils.restutil.http_put") as patch_http_put:
                    status_blob = self._init_status_blob()
                    # get_api_versions
                    patch_http_get.return_value = MockResponse(api_versions, 200)
                    # put status blob
                    patch_http_put.return_value = MockResponse(None, 500)

                    with self.assertRaises(HttpError):
                        host_plugin.put_vm_status(status_blob=status_blob, sas_url=sas_url)

                    get_versions = [args for args in patch_http_get.call_args_list if args[0][0] == hostplugin_versions_url]
                    self.assertEqual(1, len(get_versions), "Expected exactly 1 GET on {0}".format(hostplugin_versions_url))

                    self.assertEqual(1, patch_http_put.call_count)
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args[0][0])

                    self.assertEqual(2, patch_http_post.call_count)

                    # signal for /versions
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[0][0][0])
                    jstr = patch_http_post.call_args_list[0][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginVersions', obj['Observations'][0]['ObservationName'])

                    # signal for /status
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[1][0][0])
                    jstr = patch_http_post.call_args_list[1][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginStatus', obj['Observations'][0]['ObservationName'])

    def test_put_status_unhealthy_signal_permanent(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get:
            with patch("azurelinuxagent.common.utils.restutil.http_post") as patch_http_post:
                with patch("azurelinuxagent.common.utils.restutil.http_put") as patch_http_put:
                    status_blob = self._init_status_blob()
                    # get_api_versions
                    patch_http_get.return_value = MockResponse(api_versions, 200)
                    # put status blob
                    patch_http_put.return_value = MockResponse(None, 500)

                    host_plugin.status_error_state.is_triggered = Mock(return_value=True)

                    with self.assertRaises(HttpError):
                        host_plugin.put_vm_status(status_blob=status_blob, sas_url=sas_url)

                    get_versions = [args for args in patch_http_get.call_args_list if args[0][0] == hostplugin_versions_url]
                    self.assertEqual(1, len(get_versions), "Expected exactly 1 GET on {0}".format(hostplugin_versions_url))

                    self.assertEqual(1, patch_http_put.call_count)
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args[0][0])

                    self.assertEqual(2, patch_http_post.call_count)

                    # signal for /versions
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[0][0][0])
                    jstr = patch_http_post.call_args_list[0][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginVersions', obj['Observations'][0]['ObservationName'])

                    # signal for /status
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[1][0][0])
                    jstr = patch_http_post.call_args_list[1][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertFalse(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginStatus', obj['Observations'][0]['ObservationName'])

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.should_report", return_value=True)
    @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_extension_artifact")
    def test_report_fetch_health(self, patch_report_artifact, patch_should_report):
        host_plugin = self._init_host()
        host_plugin.report_fetch_health(uri='', is_healthy=True)
        self.assertEqual(0, patch_should_report.call_count)

        host_plugin.report_fetch_health(uri='http://169.254.169.254/extensionArtifact', is_healthy=True)
        self.assertEqual(0, patch_should_report.call_count)

        host_plugin.report_fetch_health(uri='http://168.63.129.16:32526/status', is_healthy=True)
        self.assertEqual(0, patch_should_report.call_count)

        self.assertEqual(None, host_plugin.fetch_last_timestamp)
        host_plugin.report_fetch_health(uri='http://168.63.129.16:32526/extensionArtifact', is_healthy=True)
        self.assertNotEqual(None, host_plugin.fetch_last_timestamp)
        self.assertEqual(1, patch_should_report.call_count)
        self.assertEqual(1, patch_report_artifact.call_count)

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.should_report", return_value=True)
    @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_status")
    def test_report_status_health(self, patch_report_status, patch_should_report):
        host_plugin = self._init_host()
        self.assertEqual(None, host_plugin.status_last_timestamp)
        host_plugin.report_status_health(is_healthy=True)
        self.assertNotEqual(None, host_plugin.status_last_timestamp)
        self.assertEqual(1, patch_should_report.call_count)
        self.assertEqual(1, patch_report_status.call_count)

    def test_should_report(self):
        host_plugin = self._init_host()
        error_state = ErrorState(min_timedelta=datetime.timedelta(minutes=5))
        period = datetime.timedelta(minutes=1)
        last_timestamp = None

        # first measurement at 0s, should report
        is_healthy = True
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(True, actual)

        # second measurement at 30s, should not report
        last_timestamp = datetime.datetime.now(UTC) - datetime.timedelta(seconds=30)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(False, actual)

        # third measurement at 60s, should report
        last_timestamp = datetime.datetime.now(UTC) - datetime.timedelta(seconds=60)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(True, actual)

        # fourth measurement unhealthy, should report and increment counter
        is_healthy = False
        self.assertEqual(0, error_state.count)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(1, error_state.count)
        self.assertEqual(True, actual)

        # fifth measurement, should not report and reset counter
        is_healthy = True
        last_timestamp = datetime.datetime.now(UTC) - datetime.timedelta(seconds=30)
        self.assertEqual(1, error_state.count)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(0, error_state.count)
        self.assertEqual(False, actual)


class TestHostPluginVmSettings(HttpRequestPredicates, AgentTestCase):
    def test_it_should_raise_protocol_error_when_the_vm_settings_request_fails(self):
        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                return MockHttpResponse(httpclient.INTERNAL_SERVER_ERROR, body="TEST ERROR")
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            with self.assertRaisesRegexCM(ProtocolError, r'GET vmSettings \[correlation ID: .* eTag: .*\]: \[HTTP Failed\] \[500: None].*TEST ERROR.*'):
                protocol.client.get_host_plugin().fetch_vm_settings()

    @staticmethod
    def _fetch_vm_settings_ignoring_errors(protocol):
        try:
            protocol.client.get_host_plugin().fetch_vm_settings()
        except (ProtocolError, VmSettingsNotSupported):
            pass

    def test_it_should_keep_track_of_errors_in_vm_settings_requests(self):
        mock_response = None

        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                if isinstance(mock_response, Exception):
                    # E0702: Raising NoneType while only classes or instances are allowed (raising-bad-type) - Disabled: we never raise None
                    raise mock_response  # pylint: disable=raising-bad-type
                return mock_response
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, http_get_handler=http_get_handler) as protocol:
            mock_response = MockHttpResponse(httpclient.INTERNAL_SERVER_ERROR)
            self._fetch_vm_settings_ignoring_errors(protocol)

            mock_response = MockHttpResponse(httpclient.BAD_REQUEST)
            self._fetch_vm_settings_ignoring_errors(protocol)
            self._fetch_vm_settings_ignoring_errors(protocol)

            mock_response = IOError("timed out")
            self._fetch_vm_settings_ignoring_errors(protocol)

            mock_response = httpclient.HTTPException()
            self._fetch_vm_settings_ignoring_errors(protocol)
            self._fetch_vm_settings_ignoring_errors(protocol)

            # force the summary by resetting its period and calling update_goal_state
            with patch("azurelinuxagent.common.protocol.hostplugin.add_event") as add_event:
                mock_response = None  # stop producing errors
                protocol.client._host_plugin._vm_settings_error_reporter._next_period = datetime.datetime.now(UTC)
                self._fetch_vm_settings_ignoring_errors(protocol)

            summary_text = [kwargs["message"] for _, kwargs in add_event.call_args_list if kwargs["op"] == "VmSettingsSummary"]

            self.assertEqual(1, len(summary_text), "Exactly 1 summary should have been produced. Got: {0} ".format(summary_text))

            summary = json.loads(summary_text[0])

            expected = {
                "requests": 6 + 2,  # two extra calls to update_goal_state (when creating the mock protocol and when forcing the summary)
                "errors": 6,
                "serverErrors": 1,
                "clientErrors": 2,
                "timeouts": 1,
                "failedRequests": 2
            }

            self.assertEqual(expected, summary, "The count of errors is incorrect")

    def test_it_should_limit_the_number_of_errors_it_reports(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            def http_get_handler(url, *_, **__):
                if self.is_host_plugin_vm_settings_request(url):
                    return MockHttpResponse(httpclient.BAD_GATEWAY)  # HostGAPlugin returns 502 for internal errors
                return None

            protocol.set_http_handlers(http_get_handler=http_get_handler)

            def get_telemetry_messages():
                return [kwargs["message"] for _, kwargs in add_event.call_args_list if kwargs["op"] == "VmSettings"]

            with patch("azurelinuxagent.common.protocol.hostplugin.add_event") as add_event:
                for _ in range(_VmSettingsErrorReporter._MaxErrors + 3):
                    self._fetch_vm_settings_ignoring_errors(protocol)

                telemetry_messages = get_telemetry_messages()
                self.assertEqual(_VmSettingsErrorReporter._MaxErrors, len(telemetry_messages), "The number of errors reported to telemetry is not the max allowed (got: {0})".format(telemetry_messages))

            # Reset the error reporter and verify that additional errors are reported
            protocol.client.get_host_plugin()._vm_settings_error_reporter._next_period = datetime.datetime.now(UTC)
            self._fetch_vm_settings_ignoring_errors(protocol)  # this triggers the reset

            with patch("azurelinuxagent.common.protocol.hostplugin.add_event") as add_event:
                self._fetch_vm_settings_ignoring_errors(protocol)

                telemetry_messages = get_telemetry_messages()
                self.assertEqual(1, len(telemetry_messages), "Expected additional errors to be reported to telemetry in the next period (got: {0})".format(telemetry_messages))

    def test_it_should_stop_issuing_vm_settings_requests_when_api_is_not_supported(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            def http_get_handler(url, *_, **__):
                if self.is_host_plugin_vm_settings_request(url):
                    return MockHttpResponse(httpclient.NOT_FOUND)  # HostGAPlugin returns 404 if the API is not supported
                return None

            protocol.set_http_handlers(http_get_handler=http_get_handler)

            def get_vm_settings_call_count():
                return len([url for url in protocol.get_tracked_urls() if "vmSettings" in url])

            self._fetch_vm_settings_ignoring_errors(protocol)
            self.assertEqual(1, get_vm_settings_call_count(), "There should have been an initial call to vmSettings.")

            self._fetch_vm_settings_ignoring_errors(protocol)
            self._fetch_vm_settings_ignoring_errors(protocol)
            self.assertEqual(1, get_vm_settings_call_count(), "Additional calls to update_goal_state should not have produced extra calls to vmSettings.")

            # reset the vmSettings check period; this should restart the calls to the API
            protocol.client._host_plugin._supports_vm_settings_next_check = datetime.datetime.now(UTC)
            protocol.client.update_goal_state()
            self.assertEqual(2, get_vm_settings_call_count(), "A second call to vmSettings was expected after the check period elapsed.")

    def test_it_should_raise_when_the_vm_settings_api_stops_being_supported(self):
        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                return MockHttpResponse(httpclient.NOT_FOUND)  # HostGAPlugin returns 404 if the API is not supported
            return None

        with
mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: host_ga_plugin = protocol.client.get_host_plugin() # Do an initial call to ensure the API is supported vm_settings, _ = host_ga_plugin.fetch_vm_settings() # Now return NOT_FOUND to indicate the API is not supported protocol.set_http_handlers(http_get_handler=http_get_handler) with self.assertRaises(VmSettingsSupportStopped) as cm: host_ga_plugin.fetch_vm_settings() self.assertEqual(vm_settings.created_on_timestamp, cm.exception.timestamp) def test_it_should_save_the_timestamp_of_the_most_recent_fast_track_goal_state(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: host_ga_plugin = protocol.client.get_host_plugin() vm_settings, _ = host_ga_plugin.fetch_vm_settings() state_file = os.path.join(conf.get_lib_dir(), "fast_track.json") self.assertTrue(os.path.exists(state_file), "The timestamp was not saved (can't find {0})".format(state_file)) with open(state_file, "r") as state_file_: state = json.load(state_file_) self.assertEqual(vm_settings.created_on_timestamp, state["timestamp"], "{0} does not contain the expected timestamp".format(state_file)) # A fabric goal state should remove the state file protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric) protocol.mock_wire_data.set_etag(888) _ = host_ga_plugin.fetch_vm_settings() self.assertFalse(os.path.exists(state_file), "{0} was not removed by a Fabric goal state".format(state_file)) class MockResponse: def __init__(self, body, status_code, reason=''): self.body = body self.status = status_code self.reason = reason def read(self): return self.body if sys.version_info[0] == 2 else bytes(self.body, encoding='utf-8') if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/protocol/test_image_info_matcher.py000066400000000000000000000114421510742556200271000ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed 
under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import unittest from azurelinuxagent.common.protocol.imds import ImageInfoMatcher class TestImageInfoMatcher(unittest.TestCase): def test_image_does_not_exist(self): doc = '{}' test_subject = ImageInfoMatcher(doc) self.assertFalse(test_subject.is_match("Red Hat", "RHEL", "6.3", "")) def test_image_exists_by_sku(self): doc = '''{ "CANONICAL": { "UBUNTUSERVER": { "16.04-LTS": { "Match": ".*" } } } }''' test_subject = ImageInfoMatcher(doc) self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "16.04-LTS", "")) self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "16.04-LTS", "16.04.201805090")) self.assertFalse(test_subject.is_match("Canonical", "UbuntuServer", "14.04.0-LTS", "16.04.201805090")) def test_image_exists_by_version(self): doc = '''{ "REDHAT": { "RHEL": { "Minimum": "6.3" } } }''' test_subject = ImageInfoMatcher(doc) self.assertFalse(test_subject.is_match("RedHat", "RHEL", "6.1", "")) self.assertFalse(test_subject.is_match("RedHat", "RHEL", "6.2", "")) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.3", "")) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.4", "")) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.5", "")) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "7.0", "")) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "7.1", "")) def test_image_exists_by_version01(self): """ Test case to ensure the matcher 
exhaustively searches all cases. REDHAT/RHEL have a SKU >= 6.3 is less precise than REDHAT/RHEL/7-LVM have a any version. Both should return a successful match. """ doc = '''{ "REDHAT": { "RHEL": { "Minimum": "6.3", "7-LVM": { "Match": ".*" } } } }''' test_subject = ImageInfoMatcher(doc) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.3", "")) self.assertTrue(test_subject.is_match("RedHat", "RHEL", "7-LVM", "")) def test_ignores_case(self): doc = '''{ "CANONICAL": { "UBUNTUSERVER": { "16.04-LTS": { "Match": ".*" } } } }''' test_subject = ImageInfoMatcher(doc) self.assertTrue(test_subject.is_match("canonical", "ubuntuserver", "16.04-lts", "")) self.assertFalse(test_subject.is_match("canonical", "ubuntuserver", "14.04.0-lts", "16.04.201805090")) def test_list_operator(self): doc = '''{ "CANONICAL": { "UBUNTUSERVER": { "List": [ "14.04.0-LTS", "14.04.1-LTS" ] } } }''' test_subject = ImageInfoMatcher(doc) self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "14.04.0-LTS", "")) self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "14.04.1-LTS", "")) self.assertFalse(test_subject.is_match("Canonical", "UbuntuServer", "22.04-LTS", "")) def test_invalid_version(self): doc = '''{ "REDHAT": { "RHEL": { "Minimum": "6.3" } } }''' test_subject = ImageInfoMatcher(doc) self.assertFalse(test_subject.is_match("RedHat", "RHEL", "16.04-LTS", "")) # This is *expected* behavior as opposed to desirable. The specification is # controlled by the agent, so there is no reason to use these values, but if # one does this is expected behavior. # # FlexibleVersion chops off all leading zeros. 
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.04", ""))
        # FlexibleVersion coerces everything to a string
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", 6.04, ""))


if __name__ == '__main__':
    unittest.main()
Azure-WALinuxAgent-a976115/tests/common/protocol/test_imds.py
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import json
import os
import unittest

from azurelinuxagent.common.protocol import imds
from azurelinuxagent.common.datacontract import set_properties
from azurelinuxagent.common.exception import HttpError, ResourceGoneError
from azurelinuxagent.common.future import ustr, httpclient
from azurelinuxagent.common.utils import restutil
from tests.lib.mock_wire_protocol import MockHttpResponse
from tests.lib.tools import AgentTestCase, data_dir, MagicMock, Mock, patch


def get_mock_compute_response():
    return MockHttpResponse(status=httpclient.OK, body='''{
        "location": "westcentralus",
        "name": "unit_test",
        "offer": "UnitOffer",
        "osType": "Linux",
        "placementGroupId": "",
        "platformFaultDomain": "0",
        "platformUpdateDomain": "0",
        "publisher": "UnitPublisher",
        "resourceGroupName": "UnitResourceGroupName",
        "sku": "UnitSku",
        "subscriptionId": "e4402c6c-2804-4a0a-9dee-d61918fc4d28",
        "tags": "Key1:Value1;Key2:Value2",
        "vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a",
        "version": "UnitVersion",
        "vmSize": "Standard_D1_v2"
    }'''.encode('utf-8'))


class TestImds(AgentTestCase):
    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get(self, mock_http_get):
        mock_http_get.return_value = get_mock_compute_response()

        test_subject = imds.ImdsClient()
        test_subject.get_compute()

        self.assertEqual(1, mock_http_get.call_count)
        positional_args, kw_args = mock_http_get.call_args

        self.assertEqual('http://169.254.169.254/metadata/instance/compute?api-version=2018-02-01', positional_args[0])
        self.assertTrue('User-Agent' in kw_args['headers'])
        self.assertTrue('Metadata' in kw_args['headers'])
        self.assertEqual(True, kw_args['headers']['Metadata'])

    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get_bad_request(self, mock_http_get):
        mock_http_get.return_value = MockHttpResponse(status=restutil.httpclient.BAD_REQUEST)

        test_subject = imds.ImdsClient()
        self.assertRaises(HttpError, test_subject.get_compute)

    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get_internal_service_error(self, mock_http_get):
        mock_http_get.return_value = MockHttpResponse(status=restutil.httpclient.INTERNAL_SERVER_ERROR)

        test_subject = imds.ImdsClient()
        self.assertRaises(HttpError, test_subject.get_compute)

    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get_empty_response(self, mock_http_get):
        mock_http_get.return_value = MockHttpResponse(status=httpclient.OK, body=''.encode('utf-8'))

        test_subject = imds.ImdsClient()
        self.assertRaises(ValueError, test_subject.get_compute)

    def test_deserialize_ComputeInfo(self):
        s = '''{
            "location": "westcentralus",
            "name": "unit_test",
            "offer": "UnitOffer",
            "osType": "Linux",
            "placementGroupId": "",
            "platformFaultDomain": "0",
            "platformUpdateDomain": "0",
            "publisher": "UnitPublisher",
            "resourceGroupName": "UnitResourceGroupName",
            "sku": "UnitSku",
            "subscriptionId": "e4402c6c-2804-4a0a-9dee-d61918fc4d28",
            "tags": "Key1:Value1;Key2:Value2",
            "vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a",
            "version": "UnitVersion",
            "vmSize": "Standard_D1_v2",
            "vmScaleSetName": "MyScaleSet",
            "zone": "In"
        }'''

        data = json.loads(s)

        compute_info = imds.ComputeInfo()
        set_properties("compute", compute_info, data)

        self.assertEqual('westcentralus', compute_info.location)
        self.assertEqual('unit_test', compute_info.name)
        self.assertEqual('UnitOffer', compute_info.offer)
        self.assertEqual('Linux', compute_info.osType)
        self.assertEqual('', compute_info.placementGroupId)
        self.assertEqual('0', compute_info.platformFaultDomain)
        self.assertEqual('0', compute_info.platformUpdateDomain)
        self.assertEqual('UnitPublisher', compute_info.publisher)
        self.assertEqual('UnitResourceGroupName', compute_info.resourceGroupName)
        self.assertEqual('UnitSku', compute_info.sku)
        self.assertEqual('e4402c6c-2804-4a0a-9dee-d61918fc4d28', compute_info.subscriptionId)
        self.assertEqual('Key1:Value1;Key2:Value2', compute_info.tags)
        self.assertEqual('f62f23fb-69e2-4df0-a20b-cb5c201a3e7a', compute_info.vmId)
        self.assertEqual('UnitVersion', compute_info.version)
        self.assertEqual('Standard_D1_v2', compute_info.vmSize)
        self.assertEqual('MyScaleSet', compute_info.vmScaleSetName)
        self.assertEqual('In', compute_info.zone)

        self.assertEqual('UnitPublisher:UnitOffer:UnitSku:UnitVersion', compute_info.image_info)

    def test_is_custom_image(self):
        image_origin = self._setup_image_origin_assert("", "", "", "")
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_CUSTOM, image_origin)

    def test_is_endorsed_CentOS(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.6", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.9", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.0", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7-LVM", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7-RAW", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "6.5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.4", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.1", ""))

    def test_is_endorsed_CoreOS(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "494.4.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "899.17.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "1688.5.3"))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "494.3.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "alpha", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "beta", ""))

    def test_is_endorsed_Debian(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "9", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("credativ", "Debian", "9-DAILY", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("credativ", "Debian", "10-DAILY", ""))

    def test_is_endorsed_Rhel(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.9", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.0", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7-LVM", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7-RAW", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.4", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("RedHat", "RHEL", "6.6", ""))

    def test_is_endorsed_SuSE(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "11-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "11-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP5", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("SuSE", "SLES", "11-SP3", ""))

    def test_is_endorsed_UbuntuServer(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.0-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.1-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.2-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.3-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.4-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.5-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.6-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.7-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.8-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "16.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "18.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "20.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "22.04-LTS", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "12.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "17.10", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "18.04-DAILY-LTS", ""))

    @staticmethod
    def _setup_image_origin_assert(publisher, offer, sku, version):
        s = '''{{
            "publisher": "{0}",
            "offer": "{1}",
            "sku": "{2}",
            "version": "{3}"
        }}'''.format(publisher, offer, sku, version)

        data = json.loads(s)
        compute_info = imds.ComputeInfo()
        set_properties("compute", compute_info, data)

        return compute_info.image_origin

    def test_response_validation(self):
        # invalid json or empty response
        self._assert_validation(http_status_code=200,
                                http_response='',
                                expected_valid=False,
                                expected_response='JSON parsing failed')

        self._assert_validation(http_status_code=200,
                                http_response=None,
                                expected_valid=False,
                                expected_response='JSON parsing failed')

        self._assert_validation(http_status_code=200,
                                http_response='{ bad json ',
                                expected_valid=False,
                                expected_response='JSON parsing failed')

        # 500 response
        self._assert_validation(http_status_code=500,
                                http_response='error response',
                                expected_valid=False,
                                expected_response='IMDS error in /metadata/instance: [HTTP Failed] [500: reason] error response')

        # 429 response - throttling does not mean service is unhealthy
        self._assert_validation(http_status_code=429,
                                http_response='server busy',
                                expected_valid=True,
                                expected_response='[HTTP Failed] [429: reason] server busy')

        # 404 response - error responses do not mean service is unhealthy
        self._assert_validation(http_status_code=404,
                                http_response='not found',
                                expected_valid=True,
                                expected_response='[HTTP Failed] [404: reason] not found')

        # valid json
        self._assert_validation(http_status_code=200,
                                http_response=self._imds_response('valid'),
                                expected_valid=True,
                                expected_response='')
        # unicode
        self._assert_validation(http_status_code=200,
                                http_response=self._imds_response('unicode'),
                                expected_valid=True,
                                expected_response='')

    def test_field_validation(self):
        # TODO: compute fields (#1249)
        self._assert_field('network', 'interface', 'ipv4', 'ipAddress', 'privateIpAddress')
        self._assert_field('network', 'interface', 'ipv4', 'ipAddress')
        self._assert_field('network', 'interface', 'ipv4')
        self._assert_field('network', 'interface', 'macAddress')
        self._assert_field('network')

    def _assert_field(self, *fields):
        response = self._imds_response('valid')
        response_obj = json.loads(ustr(response, encoding="utf-8"))

        # assert empty value
        self._update_field(response_obj, fields, '')
        altered_response = json.dumps(response_obj).encode()
        self._assert_validation(http_status_code=200,
                                http_response=altered_response,
                                expected_valid=False,
                                expected_response='Empty field: [{0}]'.format(fields[-1]))

        # assert missing value
        self._update_field(response_obj, fields, None)
        altered_response = json.dumps(response_obj).encode()
        self._assert_validation(http_status_code=200,
                                http_response=altered_response,
                                expected_valid=False,
                                expected_response='Missing field: [{0}]'.format(fields[-1]))

    def _update_field(self, obj, fields, val):
        if isinstance(obj, list):
            self._update_field(obj[0], fields, val)
        else:
            f = fields[0]
            if len(fields) == 1:
                if val is None:
                    del obj[f]
                else:
                    obj[f] = val
            else:
                self._update_field(obj[f], fields[1:], val)

    @staticmethod
    def _imds_response(f):
        path = os.path.join(data_dir, "imds", "{0}.json".format(f))
        with open(path, "rb") as fh:
            return fh.read()

    def _assert_validation(self, http_status_code, http_response, expected_valid, expected_response):
        test_subject = imds.ImdsClient()
        with patch("azurelinuxagent.common.utils.restutil.http_get") as mock_http_get:
            mock_http_get.return_value = MockHttpResponse(status=http_status_code,
                                                          reason='reason',
                                                          body=http_response)
            validate_response = test_subject.validate()

        self.assertEqual(1, mock_http_get.call_count)
        positional_args, kw_args = mock_http_get.call_args

        self.assertTrue('User-Agent' in kw_args['headers'])
        self.assertEqual(restutil.HTTP_USER_AGENT_HEALTH, kw_args['headers']['User-Agent'])
        self.assertTrue('Metadata' in kw_args['headers'])
        self.assertEqual(True, kw_args['headers']['Metadata'])
        self.assertEqual('http://169.254.169.254/metadata/instance?api-version=2018-02-01', positional_args[0])

        self.assertEqual(expected_valid, validate_response[0])
        self.assertTrue(expected_response in validate_response[1],
                        "Expected: '{0}', Actual: '{1}'".format(expected_response, validate_response[1]))

    def test_endpoint_fallback(self):
        # http error status codes are tested in test_response_validation, none of which
        # should trigger a fallback. This is confirmed as _assert_validation will count
        # http GET calls and enforces a single GET call (fallback would cause 2) and
        # checks the url called.
        test_subject = imds.ImdsClient()

        # ensure user-agent gets set correctly
        for is_health, expected_useragent in [(False, restutil.HTTP_USER_AGENT), (True, restutil.HTTP_USER_AGENT_HEALTH)]:
            # set a different resource path for health query to make debugging unit test easier
            resource_path = 'something/health' if is_health else 'something'

            # IMDS success
            test_subject._http_get = Mock(side_effect=self._mock_http_get)
            self._mock_imds_setup()
            result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
            self.assertTrue(result.success)
            self.assertFalse(result.service_error)
            self.assertEqual('Mock success response', result.response)
            for _, kwargs in test_subject._http_get.call_args_list:
                self.assertTrue('User-Agent' in kwargs['headers'])
                self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
            self.assertEqual(1, test_subject._http_get.call_count)

            # Connection error
            test_subject._http_get = Mock(side_effect=self._mock_http_get)
            self._mock_imds_setup(ioerror=True)
            result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
            self.assertFalse(result.success)
            self.assertFalse(result.service_error)
            self.assertEqual('IMDS error in /metadata/{0}: Unable to connect to endpoint'.format(resource_path), result.response)
            for _, kwargs in test_subject._http_get.call_args_list:
                self.assertTrue('User-Agent' in kwargs['headers'])
                self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
            self.assertEqual(1, test_subject._http_get.call_count)

            # IMDS throttled
            test_subject._http_get = Mock(side_effect=self._mock_http_get)
            self._mock_imds_setup(throttled=True)
            result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
            self.assertFalse(result.success)
            self.assertFalse(result.service_error)
            self.assertEqual('IMDS error in /metadata/{0}: Throttled'.format(resource_path), result.response)
            for _, kwargs in test_subject._http_get.call_args_list:
                self.assertTrue('User-Agent' in kwargs['headers'])
                self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
            self.assertEqual(1, test_subject._http_get.call_count)

            # IMDS gone error
            test_subject._http_get = Mock(side_effect=self._mock_http_get)
            self._mock_imds_setup(gone_error=True)
            result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
            self.assertFalse(result.success)
            self.assertTrue(result.service_error)
            self.assertEqual('IMDS error in /metadata/{0}: HTTP Failed with Status Code 410: Gone'.format(resource_path), result.response)
            for _, kwargs in test_subject._http_get.call_args_list:
                self.assertTrue('User-Agent' in kwargs['headers'])
                self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
            self.assertEqual(1, test_subject._http_get.call_count)

            # IMDS bad request
            test_subject._http_get = Mock(side_effect=self._mock_http_get)
            self._mock_imds_setup(bad_request=True)
            result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
            self.assertFalse(result.success)
            self.assertFalse(result.service_error)
            self.assertEqual('IMDS error in /metadata/{0}: [HTTP Failed] [404: reason] Mock not found'.format(resource_path), result.response)
            for _, kwargs in test_subject._http_get.call_args_list:
                self.assertTrue('User-Agent' in kwargs['headers'])
                self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
            self.assertEqual(1, test_subject._http_get.call_count)

    def _mock_imds_setup(self, ioerror=False, gone_error=False, throttled=False, bad_request=False):
        self._mock_imds_ioerror = ioerror  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_gone_error = gone_error  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_throttled = throttled  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_bad_request = bad_request  # pylint: disable=attribute-defined-outside-init

    def _mock_http_get(self, *_, **kwargs):
        if self._mock_imds_ioerror:
            raise HttpError("[HTTP Failed] GET http://{0}/metadata/{1} -- IOError timed out -- 6 attempts made".format(kwargs['endpoint'], kwargs['resource_path']))
        if self._mock_imds_gone_error:
            raise ResourceGoneError("Resource is gone")
        if self._mock_imds_throttled:
            raise HttpError("[HTTP Retry] GET http://{0}/metadata/{1} -- Status Code 429 -- 25 attempts made".format(kwargs['endpoint'], kwargs['resource_path']))

        resp = MagicMock()
        resp.reason = 'reason'
        if self._mock_imds_bad_request:
            resp.status = httpclient.NOT_FOUND
            resp.read.return_value = 'Mock not found'
        else:
            resp.status = httpclient.OK
            resp.read.return_value = 'Mock success response'
        return resp


if __name__ == '__main__':
    unittest.main()
Azure-WALinuxAgent-a976115/tests/common/protocol/test_metadata_server_migration_util.py
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import tempfile
import unittest

import azurelinuxagent.common.protocol.metadata_server_migration_util as migration_util
from azurelinuxagent.common.protocol.metadata_server_migration_util import _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \
                                                                           _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \
                                                                           _LEGACY_METADATA_SERVER_P7B_FILE_NAME
from tests.lib.tools import AgentTestCase, patch


class TestMetadataServerMigrationUtil(AgentTestCase):
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    def test_is_metadata_server_artifact_present(self, mock_get_lib_dir):
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        metadata_server_transport_cert_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME)
        open(metadata_server_transport_cert_file, 'w').close()
        mock_get_lib_dir.return_value = dir
        self.assertTrue(migration_util.is_metadata_server_artifact_present())

    @patch('azurelinuxagent.common.conf.get_lib_dir')
    def test_is_metadata_server_artifact_not_present(self, mock_get_lib_dir):
        mock_get_lib_dir.return_value = tempfile.gettempdir()
        self.assertFalse(migration_util.is_metadata_server_artifact_present())

    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    def test_cleanup_metadata_server_artifacts_does_not_throw_with_no_metadata_certs(self, mock_get_lib_dir, mock_enable_firewall):
        mock_get_lib_dir.return_value = tempfile.gettempdir()
        mock_enable_firewall.return_value = False
        migration_util.cleanup_metadata_server_artifacts()

    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('os.getuid')
    @patch("azurelinuxagent.common.protocol.metadata_server_migration_util._get_firewall_will_wait", return_value="-w")
    def test_cleanup_metadata_server_artifacts_firewall_enabled(self, _, mock_os_getuid, mock_get_lib_dir, mock_enable_firewall):
        # Setup Certificate Files
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        metadata_server_transport_prv_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME)
        metadata_server_transport_cert_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME)
        metadata_server_p7b_file = os.path.join(dir, _LEGACY_METADATA_SERVER_P7B_FILE_NAME)
        open(metadata_server_transport_prv_file, 'w').close()
        open(metadata_server_transport_cert_file, 'w').close()
        open(metadata_server_p7b_file, 'w').close()

        # Setup Mocks
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = True
        fixed_uid = 0
        mock_os_getuid.return_value = fixed_uid

        # Run
        with patch("azurelinuxagent.common.protocol.metadata_server_migration_util._remove_firewall") as mock_remove_firewall:
            migration_util.cleanup_metadata_server_artifacts()

        # Assert files deleted
        self.assertFalse(os.path.exists(metadata_server_transport_prv_file))
        self.assertFalse(os.path.exists(metadata_server_transport_cert_file))
        self.assertFalse(os.path.exists(metadata_server_p7b_file))

        # Assert Firewall rule calls
        self.assertEqual(1, mock_remove_firewall.call_count, "_remove_firewall should be called once")

    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('os.getuid')
    @patch("azurelinuxagent.common.protocol.metadata_server_migration_util._get_firewall_will_wait", return_value="-w")
    def test_cleanup_metadata_server_artifacts_firewall_disabled(self, _, mock_os_getuid, mock_get_lib_dir, mock_enable_firewall):
        # Setup Certificate Files
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        metadata_server_transport_prv_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME)
        metadata_server_transport_cert_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME)
        metadata_server_p7b_file = os.path.join(dir, _LEGACY_METADATA_SERVER_P7B_FILE_NAME)
        open(metadata_server_transport_prv_file, 'w').close()
        open(metadata_server_transport_cert_file, 'w').close()
        open(metadata_server_p7b_file, 'w').close()

        # Setup Mocks
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = False
        fixed_uid = 0
        mock_os_getuid.return_value = fixed_uid

        # Run
        with patch("azurelinuxagent.common.protocol.metadata_server_migration_util._remove_firewall") as mock_remove_firewall:
            migration_util.cleanup_metadata_server_artifacts()

        # Assert files deleted
        self.assertFalse(os.path.exists(metadata_server_transport_prv_file))
        self.assertFalse(os.path.exists(metadata_server_transport_cert_file))
        self.assertFalse(os.path.exists(metadata_server_p7b_file))

        # Assert Firewall rule calls
        self.assertEqual(1, mock_remove_firewall.call_count, "_remove_firewall should be called once")

    # Cleanup certificate files
    def tearDown(self):
        # pylint: disable=redefined-builtin
        dir = tempfile.gettempdir()
        for file in [_LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \
                     _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \
                     _LEGACY_METADATA_SERVER_P7B_FILE_NAME]:
            path = os.path.join(dir, file)
            if os.path.exists(path):
                os.remove(path)
        # pylint: enable=redefined-builtin
        super(TestMetadataServerMigrationUtil, self).tearDown()


if __name__ == '__main__':
    unittest.main()

Azure-WALinuxAgent-a976115/tests/common/protocol/test_protocol_util.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import os import tempfile import unittest from errno import ENOENT from threading import Thread from azurelinuxagent.common.exception import ProtocolError, DhcpError, OSUtilError from azurelinuxagent.common.protocol.goal_state import TRANSPORT_CERT_FILE_NAME, TRANSPORT_PRV_FILE_NAME from azurelinuxagent.common.protocol.metadata_server_migration_util import _METADATA_PROTOCOL_NAME, \ _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \ _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \ _LEGACY_METADATA_SERVER_P7B_FILE_NAME from azurelinuxagent.common.protocol.util import get_protocol_util, ProtocolUtil, PROTOCOL_FILE_NAME, \ WIRE_PROTOCOL_NAME, ENDPOINT_FILE_NAME from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP from tests.lib.tools import AgentTestCase, MagicMock, Mock, patch, clear_singleton_instances @patch("time.sleep") class TestProtocolUtil(AgentTestCase): MDS_CERTIFICATES = [_LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \ _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \ _LEGACY_METADATA_SERVER_P7B_FILE_NAME] WIRESERVER_CERTIFICATES = [TRANSPORT_CERT_FILE_NAME, TRANSPORT_PRV_FILE_NAME] def setUp(self): super(TestProtocolUtil, self).setUp() # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) # Cleanup certificate files, protocol file, and endpoint files def tearDown(self): dir = tempfile.gettempdir() # pylint: disable=redefined-builtin for path in [os.path.join(dir, mds_cert) for mds_cert in TestProtocolUtil.MDS_CERTIFICATES]: if os.path.exists(path): os.remove(path) for path in [os.path.join(dir, ws_cert) for ws_cert in TestProtocolUtil.WIRESERVER_CERTIFICATES]: if os.path.exists(path): os.remove(path) protocol_path = os.path.join(dir, PROTOCOL_FILE_NAME) if os.path.exists(protocol_path): os.remove(protocol_path) endpoint_path = os.path.join(dir, ENDPOINT_FILE_NAME) if 
os.path.exists(endpoint_path): os.remove(endpoint_path) super(TestProtocolUtil, self).tearDown() def test_get_protocol_util_should_return_same_object_for_same_thread(self, _): protocol_util1 = get_protocol_util() protocol_util2 = get_protocol_util() self.assertEqual(protocol_util1, protocol_util2) def test_get_protocol_util_should_return_different_object_for_different_thread(self, _): protocol_util_instances = [] errors = [] def get_protocol_util_instance(): try: protocol_util_instances.append(get_protocol_util()) except Exception as e: errors.append(e) t1 = Thread(target=get_protocol_util_instance) t2 = Thread(target=get_protocol_util_instance) t1.start() t2.start() t1.join() t2.join() self.assertEqual(len(protocol_util_instances), 2, "Could not create the expected number of protocols. Errors: [{0}]".format(errors)) self.assertNotEqual(protocol_util_instances[0], protocol_util_instances[1], "The instances created by different threads should be different") @patch("azurelinuxagent.common.protocol.util.WireProtocol") def test_detect_protocol(self, WireProtocol, _): WireProtocol.return_value = MagicMock() protocol_util = get_protocol_util() protocol_util.dhcp_handler = MagicMock() protocol_util.dhcp_handler.endpoint = "foo.bar" # Test wire protocol is available protocol = protocol_util.get_protocol() self.assertEqual(WireProtocol.return_value, protocol) # Test wire protocol is not available protocol_util.clear_protocol() WireProtocol.return_value.detect.side_effect = ProtocolError() self.assertRaises(ProtocolError, protocol_util.get_protocol) @patch("azurelinuxagent.common.conf.get_lib_dir") @patch("azurelinuxagent.common.protocol.util.WireProtocol") def test_detect_protocol_dhcp_unavailable(self, WireProtocol, mock_get_lib_dir, _): WireProtocol.return_value.detect = Mock() mock_get_lib_dir.return_value = self.tmp_dir protocol_util = get_protocol_util() protocol_util.osutil = MagicMock() protocol_util.osutil.is_dhcp_available.return_value = False 
protocol_util.dhcp_handler = MagicMock() protocol_util.dhcp_handler.endpoint = None protocol_util.dhcp_handler.run = Mock() endpoint_file = protocol_util._get_wireserver_endpoint_file_path() # pylint: disable=unused-variable # Test wire protocol when no endpoint file has been written protocol_util._detect_protocol(init_goal_state=True, create_transport_certificate=True, save_to_history=False) self.assertEqual(KNOWN_WIRESERVER_IP, protocol_util.get_wireserver_endpoint()) # Test wire protocol on dhcp failure protocol_util.osutil.is_dhcp_available.return_value = True protocol_util.dhcp_handler.run.side_effect = DhcpError() self.assertRaises(ProtocolError, lambda: protocol_util._detect_protocol(init_goal_state=True, create_transport_certificate=True, save_to_history=False)) @patch("azurelinuxagent.common.conf.get_lib_dir") @patch("azurelinuxagent.common.protocol.util.WireProtocol") @patch("azurelinuxagent.common.conf.get_dhcp_discovery_enabled") def test_detect_protocol_dhcp_discovery_disabled(self, mock_get_dhcp_discovery_enabled, WireProtocol, mock_get_lib_dir, _): mock_get_dhcp_discovery_enabled.return_value = False WireProtocol.return_value.detect = Mock() mock_get_lib_dir.return_value = self.tmp_dir protocol_util = get_protocol_util() protocol_util.dhcp_handler = MagicMock() protocol_util.dhcp_handler.endpoint = None protocol_util.dhcp_handler.run = Mock() # Test wire protocol when no endpoint file has been written, dhcp handler should not be called protocol_util._detect_protocol(init_goal_state=True, create_transport_certificate=True, save_to_history=False) self.assertEqual(KNOWN_WIRESERVER_IP, protocol_util.get_wireserver_endpoint()) self.assertTrue(protocol_util.dhcp_handler.run.call_count == 0) @patch("azurelinuxagent.common.protocol.util.WireProtocol") def test_get_protocol(self, WireProtocol, _): WireProtocol.return_value = MagicMock() protocol_util = get_protocol_util() protocol_util.get_wireserver_endpoint = Mock() protocol_util._detect_protocol = 
MagicMock() protocol_util._save_protocol("WireProtocol") protocol = protocol_util.get_protocol() self.assertEqual(WireProtocol.return_value, protocol) protocol_util.get_wireserver_endpoint.assert_any_call() @patch('azurelinuxagent.common.conf.get_lib_dir') @patch('azurelinuxagent.common.conf.enable_firewall') @patch("azurelinuxagent.common.protocol.metadata_server_migration_util._get_firewall_will_wait", return_value="-w") def test_get_protocol_wireserver_to_wireserver_update_removes_metadataserver_artifacts(self, _, mock_enable_firewall, mock_get_lib_dir, __): """ This is for testing that agent upgrade from WireServer to WireServer protocol will clean up leftover MDS Certificates (from a previous Metadata Server to Wireserver update, intermediate updated agent does not clean up MDS certificates) and reset firewall rules. We don't test that WireServer certificates, protocol file, or endpoint file were created because we already expect them to be created since we are updating from a WireServer agent. 
""" # Setup Protocol file with WireProtocol dir = tempfile.gettempdir() # pylint: disable=redefined-builtin filename = os.path.join(dir, PROTOCOL_FILE_NAME) with open(filename, "w") as f: f.write(WIRE_PROTOCOL_NAME) # Setup MDS Certificates mds_cert_paths = [os.path.join(dir, mds_cert) for mds_cert in TestProtocolUtil.MDS_CERTIFICATES] for mds_cert_path in mds_cert_paths: open(mds_cert_path, "w").close() # Setup mocks mock_get_lib_dir.return_value = dir mock_enable_firewall.return_value = True protocol_util = get_protocol_util() protocol_util.dhcp_handler = MagicMock() protocol_util.dhcp_handler.endpoint = KNOWN_WIRESERVER_IP # Run with patch("azurelinuxagent.common.protocol.metadata_server_migration_util._remove_firewall") as mock_remove_firewall: protocol_util.get_protocol() # Check MDS Certs do not exist for mds_cert_path in mds_cert_paths: self.assertFalse(os.path.exists(mds_cert_path)) # Check firewall rules was reset self.assertEqual(1, mock_remove_firewall.call_count, "remove_firewall should be called once") @patch('azurelinuxagent.common.conf.get_lib_dir') @patch('azurelinuxagent.common.conf.enable_firewall') @patch('azurelinuxagent.common.protocol.wire.WireClient') @patch("azurelinuxagent.common.protocol.metadata_server_migration_util._get_firewall_will_wait", return_value="-w") def test_get_protocol_metadataserver_to_wireserver_update_removes_metadataserver_artifacts(self, _, mock_wire_client, mock_enable_firewall, mock_get_lib_dir, __): """ This is for testing that agent upgrade from MetadataServer to WireServer protocol will clean up leftover MDS Certificates and reset firewall rules. Also check that WireServer certificates are present, and protocol/endpoint files are written to appropriately. 
""" # Setup Protocol file with MetadataProtocol dir = tempfile.gettempdir() # pylint: disable=redefined-builtin protocol_filename = os.path.join(dir, PROTOCOL_FILE_NAME) with open(protocol_filename, "w") as f: f.write(_METADATA_PROTOCOL_NAME) # Setup MDS Certificates mds_cert_paths = [os.path.join(dir, mds_cert) for mds_cert in TestProtocolUtil.MDS_CERTIFICATES] for mds_cert_path in mds_cert_paths: open(mds_cert_path, "w").close() # Setup mocks mock_get_lib_dir.return_value = dir mock_enable_firewall.return_value = True protocol_util = get_protocol_util() protocol_util.osutil = MagicMock() mock_wire_client.return_value = MagicMock() protocol_util.dhcp_handler = MagicMock() protocol_util.dhcp_handler.endpoint = KNOWN_WIRESERVER_IP # Run with patch("azurelinuxagent.common.protocol.metadata_server_migration_util._remove_firewall") as mock_remove_firewall: protocol_util.get_protocol() # Check MDS Certs do not exist for mds_cert_path in mds_cert_paths: self.assertFalse(os.path.exists(mds_cert_path)) # Check that WireServer Certs exist ws_cert_paths = [os.path.join(dir, ws_cert) for ws_cert in TestProtocolUtil.WIRESERVER_CERTIFICATES] for ws_cert_path in ws_cert_paths: self.assertTrue(os.path.isfile(ws_cert_path)) # Check firewall rules was reset self.assertEqual(1, mock_remove_firewall.call_count, "remove_firewall should be called once") # Check Protocol File is updated to WireProtocol with open(os.path.join(dir, PROTOCOL_FILE_NAME), "r") as f: self.assertEqual(f.read(), WIRE_PROTOCOL_NAME) # Check Endpoint file is updated to WireServer IP with open(os.path.join(dir, ENDPOINT_FILE_NAME), 'r') as f: self.assertEqual(f.read(), KNOWN_WIRESERVER_IP) @patch('azurelinuxagent.common.conf.get_lib_dir') @patch('azurelinuxagent.common.conf.enable_firewall') @patch('azurelinuxagent.common.protocol.wire.WireClient') def test_get_protocol_new_wireserver_agent_generates_certificates(self, mock_wire_client, mock_enable_firewall, mock_get_lib_dir, _): """ This is for testing that a new 
WireServer Linux Agent generates appropriate certificates, protocol file, and endpoint file. """ # Setup mocks dir = tempfile.gettempdir() # pylint: disable=redefined-builtin mock_get_lib_dir.return_value = dir mock_enable_firewall.return_value = True protocol_util = get_protocol_util() protocol_util.osutil = MagicMock() mock_wire_client.return_value = MagicMock() protocol_util.dhcp_handler = MagicMock() protocol_util.dhcp_handler.endpoint = KNOWN_WIRESERVER_IP # Run with patch("azurelinuxagent.common.protocol.metadata_server_migration_util._remove_firewall") as mock_remove_firewall: protocol_util.get_protocol() # Check that WireServer Certs exist ws_cert_paths = [os.path.join(dir, ws_cert) for ws_cert in TestProtocolUtil.WIRESERVER_CERTIFICATES] for ws_cert_path in ws_cert_paths: self.assertTrue(os.path.isfile(ws_cert_path)) # Check firewall rules were not reset mock_remove_firewall.assert_not_called() # Check Protocol File is updated to WireProtocol with open(os.path.join(dir, PROTOCOL_FILE_NAME), "r") as f: self.assertEqual(f.read(), WIRE_PROTOCOL_NAME) # Check Endpoint file is updated to WireServer IP with open(os.path.join(dir, ENDPOINT_FILE_NAME), 'r') as f: self.assertEqual(f.read(), KNOWN_WIRESERVER_IP) @patch("azurelinuxagent.common.protocol.util.fileutil") @patch("azurelinuxagent.common.conf.get_lib_dir") def test_endpoint_file_states(self, mock_get_lib_dir, mock_fileutil, _): mock_get_lib_dir.return_value = self.tmp_dir protocol_util = get_protocol_util() endpoint_file = protocol_util._get_wireserver_endpoint_file_path() # Test get endpoint for io error mock_fileutil.read_file.side_effect = IOError() ep = protocol_util.get_wireserver_endpoint() self.assertEqual(ep, KNOWN_WIRESERVER_IP) # Test get endpoint when file not found mock_fileutil.read_file.side_effect = IOError(ENOENT, 'File not found') ep = protocol_util.get_wireserver_endpoint() self.assertEqual(ep, KNOWN_WIRESERVER_IP) # Test get endpoint for empty file mock_fileutil.read_file.return_value = 
"" ep = protocol_util.get_wireserver_endpoint() self.assertEqual(ep, KNOWN_WIRESERVER_IP) # Test set endpoint for io error mock_fileutil.write_file.side_effect = IOError() ep = protocol_util.get_wireserver_endpoint() self.assertRaises(OSUtilError, protocol_util._set_wireserver_endpoint, 'abc') # Test clear endpoint for io error with open(endpoint_file, "w+") as ep_fd: ep_fd.write("") with patch('os.remove') as mock_remove: protocol_util._clear_wireserver_endpoint() self.assertEqual(1, mock_remove.call_count) self.assertEqual(endpoint_file, mock_remove.call_args_list[0][0][0]) # Test clear endpoint when file not found with patch('os.remove') as mock_remove: mock_remove = Mock(side_effect=IOError(ENOENT, 'File not found')) protocol_util._clear_wireserver_endpoint() mock_remove.assert_not_called() def test_protocol_file_states(self, _): protocol_util = get_protocol_util() protocol_util._clear_wireserver_endpoint = Mock() protocol_file = protocol_util._get_protocol_file_path() # Test clear protocol for io error with open(protocol_file, "w+") as proto_fd: proto_fd.write("") with patch('os.remove') as mock_remove: protocol_util.clear_protocol() self.assertEqual(1, protocol_util._clear_wireserver_endpoint.call_count) self.assertEqual(1, mock_remove.call_count) self.assertEqual(protocol_file, mock_remove.call_args_list[0][0][0]) # Test clear protocol when file not found protocol_util._clear_wireserver_endpoint.reset_mock() with patch('os.remove') as mock_remove: protocol_util.clear_protocol() self.assertEqual(1, protocol_util._clear_wireserver_endpoint.call_count) self.assertEqual(1, mock_remove.call_count) self.assertEqual(protocol_file, mock_remove.call_args_list[0][0][0]) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/protocol/test_wire.py000066400000000000000000002111351510742556200242470ustar00rootroot00000000000000# -*- encoding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 
(the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import json import os import socket import time import unittest import uuid from azurelinuxagent.common.agent_supported_feature import SupportedFeatureNames, get_supported_feature_by_name, \ get_agent_supported_features_list_for_crp from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.exception import ResourceGoneError, ProtocolError, \ ExtensionDownloadError, HttpError from azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config import ExtensionsGoalStateFromExtensionsConfig from azurelinuxagent.common.protocol.goal_state import GoalStateProperties from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol from azurelinuxagent.common.protocol.wire import WireProtocol, WireClient, \ StatusBlob, VMStatus from azurelinuxagent.common.telemetryevent import GuestAgentExtensionEventsSchema, \ TelemetryEventParam, TelemetryEvent from azurelinuxagent.common.utils import restutil from azurelinuxagent.common.version import CURRENT_VERSION, DISTRO_NAME, DISTRO_VERSION from azurelinuxagent.ga.exthandlers import get_exthandlers_handler from tests.ga.test_monitor import random_generator from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE_NO_EXT, DATA_FILE from 
tests.lib.wire_protocol_data import WireProtocolData from tests.lib.tools import patch, AgentTestCase, load_bin_data data_with_bom = b'\xef\xbb\xbfhehe' testurl = 'http://foo' testtype = 'BlockBlob' WIRESERVER_URL = '168.63.129.16' def get_event(message, duration=30000, evt_type="", is_internal=False, is_success=True, name="", op="Unknown", version=CURRENT_VERSION, eventId=1): event = TelemetryEvent(eventId, "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX") event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, name)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str(version))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.IsInternal, is_internal)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, op)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, is_success)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, message)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, duration)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.ExtensionType, evt_type)) return event @contextlib.contextmanager def create_mock_protocol(): with mock_wire_protocol(DATA_FILE_NO_EXT) as protocol: yield protocol @patch("time.sleep") @patch("azurelinuxagent.common.protocol.wire.CryptUtil") @patch("azurelinuxagent.common.protocol.healthservice.HealthService._report") class TestWireProtocol(AgentTestCase, HttpRequestPredicates): def setUp(self): super(TestWireProtocol, self).setUp() HostPluginProtocol.is_default_channel = False def _test_getters(self, test_data, certsMustBePresent, __, MockCryptUtil, _): MockCryptUtil.side_effect = test_data.mock_crypt_util with patch.object(restutil, 'http_get', test_data.mock_http_get): protocol = WireProtocol(WIRESERVER_URL) protocol.detect() protocol.get_vminfo() protocol.get_certs() 
ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions for ext_handler in ext_handlers: protocol.get_goal_state().fetch_extension_manifest(ext_handler.name, ext_handler.manifest_uris) crt1 = os.path.join(self.tmp_dir, '8979F1AC8C4215827BF3B5A403E6137B504D02A4.crt') crt2 = os.path.join(self.tmp_dir, 'F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9.crt') prv2 = os.path.join(self.tmp_dir, 'F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9.prv') if certsMustBePresent: self.assertTrue(os.path.isfile(crt1)) self.assertTrue(os.path.isfile(crt2)) self.assertTrue(os.path.isfile(prv2)) else: self.assertFalse(os.path.isfile(crt1)) self.assertFalse(os.path.isfile(crt2)) self.assertFalse(os.path.isfile(prv2)) self.assertEqual("1", protocol.get_goal_state().incarnation) @staticmethod def _get_telemetry_events_generator(event_list): def _yield_events(): for telemetry_event in event_list: yield telemetry_event return _yield_events() def test_getters(self, *args): """Normal case""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) self._test_getters(test_data, True, *args) def test_getters_no_ext(self, *args): """Provision with agent is not checked""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_EXT) self._test_getters(test_data, True, *args) def test_getters_ext_no_settings(self, *args): """Extensions without any settings""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_SETTINGS) self._test_getters(test_data, True, *args) def test_getters_ext_no_public(self, *args): """Extensions without any public settings""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_PUBLIC) self._test_getters(test_data, True, *args) def test_getters_ext_no_cert_format(self, *args): """Certificate format not specified""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_CERT_FORMAT) self._test_getters(test_data, True, *args) def 
test_getters_ext_cert_format_not_pfx(self, *args):
        """Certificate format is not Pkcs7BlobWithPfxContents specified"""
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_CERT_FORMAT_NOT_PFX)
        self._test_getters(test_data, False, *args)

    @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_extension_artifact")
    def test_getters_with_stale_goal_state(self, patch_report, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        test_data.emulate_stale_goal_state = True

        self._test_getters(test_data, True, *args)
        # Ensure HostPlugin was invoked
        self.assertEqual(1, test_data.call_counts["/versions"])
        self.assertEqual(2, test_data.call_counts["extensionArtifact"])
        # Ensure the expected number of HTTP calls were made
        # -- Tracking calls to retrieve GoalState is problematic since it is
        #    fetched often; however, the dependent documents, such as the
        #    HostingEnvironmentConfig, will be retrieved the expected number
        self.assertEqual(1, test_data.call_counts["hostingEnvironmentConfig"])
        self.assertEqual(1, patch_report.call_count)

    def test_call_storage_kwargs(self, *args):  # pylint: disable=unused-argument
        with patch.object(restutil, 'http_get') as http_patch:
            http_req = restutil.http_get
            url = testurl
            headers = {}

            # no kwargs -- Default to True
            WireClient.call_storage_service(http_req)
            # kwargs, no use_proxy -- Default to True
            WireClient.call_storage_service(http_req, url, headers)
            # kwargs, use_proxy None -- Default to True
            WireClient.call_storage_service(http_req, url, headers, use_proxy=None)
            # kwargs, use_proxy False -- Keep False
            WireClient.call_storage_service(http_req, url, headers, use_proxy=False)
            # kwargs, use_proxy True -- Keep True
            WireClient.call_storage_service(http_req, url, headers, use_proxy=True)
            # assert
            self.assertTrue(http_patch.call_count == 5)
            for i in range(0, 5):
                c = http_patch.call_args_list[i][-1]['use_proxy']
                self.assertTrue(c == (True if i != 3 else False))

    def test_status_blob_parsing(self, *args):  # pylint: disable=unused-argument
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state
            self.assertIsInstance(extensions_goal_state, ExtensionsGoalStateFromExtensionsConfig)
            self.assertEqual(extensions_goal_state.status_upload_blob,
                             'https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?'
                             'sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&'
                             'sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo')
            self.assertEqual(protocol.get_goal_state().extensions_goal_state.status_upload_blob_type, u'BlockBlob')

    def test_get_host_ga_plugin(self, *args):  # pylint: disable=unused-argument
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            host_plugin = protocol.client.get_host_plugin()
            goal_state = protocol.client.get_goal_state()
            self.assertEqual(goal_state.container_id, host_plugin.container_id)
            self.assertEqual(goal_state.role_config_name, host_plugin.role_config_name)

    def test_upload_status_blob_should_use_the_host_channel_by_default(self, *_):
        def http_put_handler(url, *_, **__):  # pylint: disable=inconsistent-return-statements
            if protocol.get_endpoint() in url and url.endswith('/status'):
                return MockHttpResponse(200)

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_put_handler=http_put_handler) as protocol:
            HostPluginProtocol.is_default_channel = False

            protocol.client.status_blob.vm_status = VMStatus(message="Ready", status="Ready")
            protocol.client.upload_status_blob()

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 1, 'Expected one post request to the host: [{0}]'.format(urls))

    def test_upload_status_blob_host_ga_plugin(self, *_):
        with create_mock_protocol() as protocol:
            protocol.client.status_blob.vm_status = VMStatus(message="Ready", status="Ready")

            with patch.object(HostPluginProtocol, "ensure_initialized", return_value=True):
                with patch.object(StatusBlob, "upload", return_value=False) as patch_default_upload:
                    with patch.object(HostPluginProtocol, "_put_block_blob_status") as patch_http:
                        HostPluginProtocol.is_default_channel = False
                        protocol.client.upload_status_blob()

                        patch_default_upload.assert_not_called()
                        patch_http.assert_called_once_with(testurl, protocol.client.status_blob)
                        self.assertFalse(HostPluginProtocol.is_default_channel)

    def test_upload_status_blob_reports_prepare_error(self, *_):
        with create_mock_protocol() as protocol:
            protocol.client.status_blob.vm_status = VMStatus(message="Ready", status="Ready")

            with patch.object(StatusBlob, "prepare", side_effect=Exception) as mock_prepare:
                self.assertRaises(ProtocolError, protocol.client.upload_status_blob)
                self.assertEqual(1, mock_prepare.call_count)

    def test_get_in_vm_artifacts_profile_blob_not_available(self, *_):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_in_vm_empty_artifacts_profile.xml"

        with mock_wire_protocol(data_file) as protocol:
            self.assertFalse(protocol.get_goal_state().extensions_goal_state.on_hold)

    def test_it_should_set_on_hold_to_false_when_the_in_vm_artifacts_profile_is_not_valid(self, *_):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol:
            extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold
            self.assertTrue(extensions_on_hold, "Extensions should be on hold in the test data")

            def http_get_handler(url, *_, **kwargs):
                if self.is_in_vm_artifacts_profile_request(url) or self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs):
                    return mock_response
                return None
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            mock_response = MockHttpResponse(200, body=None)
            protocol.client.reset_goal_state()
            extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold
            self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response body is None")

            mock_response = MockHttpResponse(200, ' '.encode('utf-8'))
            protocol.client.reset_goal_state()
            extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold
            self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response is an empty string")

            mock_response = MockHttpResponse(200, '{ }'.encode('utf-8'))
            protocol.client.reset_goal_state()
            extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold
            self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response is an empty json object")

            with patch("azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config.add_event") as add_event:
                mock_response = MockHttpResponse(200, 'invalid json'.encode('utf-8'))
                protocol.client.reset_goal_state()
                extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold
                self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response is not valid json")

                events = [kwargs for _, kwargs in add_event.call_args_list if kwargs['op'] == WALAEventOperation.ArtifactsProfileBlob]
                self.assertEqual(1, len(events), "Expected 1 event for operation ArtifactsProfileBlob. Got: {0}".format(events))
                self.assertFalse(events[0]['is_success'], "Expected ArtifactsProfileBlob's success to be False")
                self.assertTrue("Can't parse the artifacts profile blob" in events[0]['message'],
                                "Expected 'Can't parse the artifacts profile blob' as the reason for the operation failure. Got: {0}".format(events[0]['message']))

    @patch("socket.gethostname", return_value="hostname")
    @patch("time.gmtime", return_value=time.localtime(1485543256))
    def test_report_vm_status(self, *args):  # pylint: disable=unused-argument
        status = 'status'
        message = 'message'

        client = WireProtocol(WIRESERVER_URL).client
        actual = StatusBlob(client=client)
        actual.set_vm_status(VMStatus(status=status, message=message))
        timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())

        formatted_msg = {
            'lang': 'en-US',
            'message': message
        }
        v1_ga_status = {
            'version': str(CURRENT_VERSION),
            'status': status,
            'formattedMessage': formatted_msg
        }
        v1_ga_guest_info = {
            'computerName': socket.gethostname(),
            'osName': DISTRO_NAME,
            'osVersion': DISTRO_VERSION,
            'version': str(CURRENT_VERSION),
        }
        v1_agg_status = {
            'guestAgentStatus': v1_ga_status,
            'handlerAggregateStatus': []
        }

        supported_features = []
        for _, feature in get_agent_supported_features_list_for_crp().items():
            supported_features.append(
                {
                    "Key": feature.name,
                    "Value": feature.version
                }
            )

        v1_vm_status = {
            'version': '1.1',
            'timestampUTC': timestamp,
            'aggregateStatus': v1_agg_status,
            'guestOSInfo': v1_ga_guest_info,
            'supportedFeatures': supported_features
        }
        self.assertEqual(json.dumps(v1_vm_status), actual.to_json())

    def test_it_should_report_supported_features_in_status_blob_if_supported(self, *_):
        with mock_wire_protocol(DATA_FILE) as protocol:
            def mock_http_put(url, *args, **__):
                if HttpRequestPredicates.is_host_plugin_status_request(url):
                    # Skip reading the HostGA request data as its encoded
                    return MockHttpResponse(status=500)
                protocol.aggregate_status = json.loads(args[0])
                return MockHttpResponse(status=201)

            protocol.aggregate_status = {}
            protocol.set_http_handlers(http_put_handler=mock_http_put)
            exthandlers_handler = get_exthandlers_handler(protocol)

            with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", True):
                with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", True):
                    exthandlers_handler.run()
                    exthandlers_handler.report_ext_handlers_status()

                    self.assertIsNotNone(protocol.aggregate_status, "Aggregate status should not be None")
                    self.assertIn("supportedFeatures", protocol.aggregate_status, "supported features not reported")
                    multi_config_feature = get_supported_feature_by_name(SupportedFeatureNames.MultiConfig)
                    found = False
                    for feature in protocol.aggregate_status['supportedFeatures']:
                        if feature['Key'] == multi_config_feature.name and feature['Value'] == multi_config_feature.version:
                            found = True
                            break
                    self.assertTrue(found, "Multi-config name should be present in supportedFeatures")
                    ga_versioning_feature = get_supported_feature_by_name(SupportedFeatureNames.GAVersioningGovernance)
                    found = False
                    for feature in protocol.aggregate_status['supportedFeatures']:
                        if feature['Key'] == ga_versioning_feature.name and feature['Value'] == ga_versioning_feature.version:
                            found = True
                            break
                    self.assertTrue(found, "ga versioning name should be present in supportedFeatures")

            # Feature should not be reported if not present
            with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", False):
                with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", False):
                    exthandlers_handler.run()
                    exthandlers_handler.report_ext_handlers_status()

                    self.assertIsNotNone(protocol.aggregate_status, "Aggregate status should not be None")
                    if "supportedFeatures" not in protocol.aggregate_status:
                        # In the case Multi-config and GA Versioning are the only features available, 'supportedFeatures'
                        # should not be reported in the status blob as it's not supported as of now.
                        # Asserting no other feature was available to report back to crp
                        self.assertEqual(0, len(get_agent_supported_features_list_for_crp()),
                                         "supportedFeatures should be available if there are more features")
                        return

                    # If there are other features available, confirm MultiConfig and GA versioning was not reported
                    multi_config_feature = get_supported_feature_by_name(SupportedFeatureNames.MultiConfig)
                    found = False
                    for feature in protocol.aggregate_status['supportedFeatures']:
                        if feature['Key'] == multi_config_feature.name and feature['Value'] == multi_config_feature.version:
                            found = True
                            break
                    self.assertFalse(found, "Multi-config name should not be present in supportedFeatures")
                    ga_versioning_feature = get_supported_feature_by_name(SupportedFeatureNames.GAVersioningGovernance)
                    found = False
                    for feature in protocol.aggregate_status['supportedFeatures']:
                        if feature['Key'] == ga_versioning_feature.name and feature['Value'] == ga_versioning_feature.version:
                            found = True
                            break
                    self.assertFalse(found, "ga versioning name should not be present in supportedFeatures")

    @patch("azurelinuxagent.common.utils.restutil.http_request")
    def test_send_encoded_event(self, mock_http_request, *args):  # pylint: disable=unused-argument
        mock_http_request.return_value = MockHttpResponse(200)

        event_str = u'a test string'
        client = WireProtocol(WIRESERVER_URL).client
        client._send_encoded_event("foo", event_str.encode('utf-8'), flush=False)

        first_call = mock_http_request.call_args_list[0]
        args, kwargs = first_call
        method, url, body_received, timeout = args  # pylint: disable=unused-variable
        headers = kwargs['headers']

        # the headers should include utf-8 encoding...
        self.assertTrue("utf-8" in headers['Content-Type'])
        # the body is encoded, decode and check for equality
        self.assertIn(event_str, body_received.decode('utf-8'))

    @patch("azurelinuxagent.common.protocol.wire.WireClient._send_encoded_event")
    def test_report_event_small_event(self, patch_send_event, *args):  # pylint: disable=unused-argument
        event_list = []
        client = WireProtocol(WIRESERVER_URL).client

        event_str = random_generator(10)
        event_list.append(get_event(message=event_str))

        event_str = random_generator(100)
        event_list.append(get_event(message=event_str))

        event_str = random_generator(1000)
        event_list.append(get_event(message=event_str))

        event_str = random_generator(10000)
        event_list.append(get_event(message=event_str))

        client.report_event(self._get_telemetry_events_generator(event_list))

        # It merges the messages into one message
        self.assertEqual(patch_send_event.call_count, 1)

    @patch("azurelinuxagent.common.protocol.wire.WireClient._send_encoded_event")
    def test_report_event_multiple_events_to_fill_buffer(self, patch_send_event, *args):  # pylint: disable=unused-argument
        event_list = []
        client = WireProtocol(WIRESERVER_URL).client

        event_str = random_generator(2 ** 15)
        event_list.append(get_event(message=event_str))
        event_list.append(get_event(message=event_str))

        client.report_event(self._get_telemetry_events_generator(event_list))

        # It merges the messages into one message
        self.assertEqual(patch_send_event.call_count, 2)

    @patch("azurelinuxagent.common.protocol.wire.WireClient._send_encoded_event")
    def test_report_event_large_event(self, patch_send_event, *args):  # pylint: disable=unused-argument
        event_list = []
        event_str = random_generator(2 ** 18)
        event_list.append(get_event(message=event_str))

        client = WireProtocol(WIRESERVER_URL).client
        client.report_event(self._get_telemetry_events_generator(event_list))

        self.assertEqual(patch_send_event.call_count, 0)

    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_report_event_http_req_should_do_max_retries_on_throttling_error(self, mock_http_request, *args):  # pylint: disable=unused-argument
        mock_http_request.return_value = MockHttpResponse(429)

        event_list = []
        event_str = random_generator(2 ** 15)
        event_list.append(get_event(message=event_str))

        client = WireProtocol(WIRESERVER_URL).client
        with patch("azurelinuxagent.common.utils.restutil.TELEMETRY_THROTTLE_DELAY_IN_SECONDS", 0.001):
            client.report_event(self._get_telemetry_events_generator(event_list))
            self.assertEqual(mock_http_request.call_count, 3)

        mock_http_request.reset_mock()
        self.assertEqual(mock_http_request.call_count, 0)
        mock_http_request.return_value = MockHttpResponse(429)

        with patch("azurelinuxagent.common.utils.restutil.TELEMETRY_FLUSH_THROTTLE_DELAY_IN_SECONDS", 0.001):
            client.report_event(self._get_telemetry_events_generator(event_list), flush=True)
            self.assertEqual(mock_http_request.call_count, 3)

    def test_get_header_for_remote_access_should_use_aes128(self, *_):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            headers = protocol.client.get_header_for_remote_access()
            self.assertIn("x-ms-cipher-name", headers)
            self.assertEqual(headers["x-ms-cipher-name"], "AES128_CBC", "Unexpected x-ms-cipher-name")


class TestWireClient(HttpRequestPredicates, AgentTestCase):

    def test_get_ext_conf_without_extensions_should_retrieve_vmagent_manifests_info(self, *args):  # pylint: disable=unused-argument
        # Basic test for extensions_goal_state when extensions are not present in the config. The test verifies that
        # extensions_goal_state fetches the correct data by comparing the returned data with the test data provided by
        # the mock_wire_protocol.
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_NO_EXT) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            ext_handlers_names = [ext_handler.name for ext_handler in extensions_goal_state.extensions]
            self.assertEqual(0, len(extensions_goal_state.extensions),
                             "Unexpected number of extension handlers in the extension config: [{0}]".format(ext_handlers_names))
            vmagent_families = [manifest.name for manifest in extensions_goal_state.agent_families]
            self.assertEqual(0, len(extensions_goal_state.agent_families),
                             "Unexpected number of vmagent manifests in the extension config: [{0}]".format(vmagent_families))
            self.assertFalse(extensions_goal_state.on_hold, "Extensions On Hold is expected to be False")

    def test_get_ext_conf_with_extensions_should_retrieve_ext_handlers_and_vmagent_manifests_info(self):
        # Basic test for extensions_goal_state when extensions are present in the config. The test verifies that
        # extensions_goal_state fetches the correct data by comparing the returned data with the test data provided by
        # the mock_wire_protocol.
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            extensions_goal_state = protocol.get_goal_state().extensions_goal_state

            ext_handlers_names = [ext_handler.name for ext_handler in extensions_goal_state.extensions]
            self.assertEqual(1, len(extensions_goal_state.extensions),
                             "Unexpected number of extension handlers in the extension config: [{0}]".format(ext_handlers_names))
            vmagent_families = [manifest.name for manifest in extensions_goal_state.agent_families]
            self.assertEqual(2, len(extensions_goal_state.agent_families),
                             "Unexpected number of vmagent manifests in the extension config: [{0}]".format(vmagent_families))
            self.assertEqual("https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw"
                             "&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo",
                             extensions_goal_state.status_upload_blob, "Unexpected value for status upload blob URI")
            self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type,
                             "Unexpected status upload blob type in the extension config")
            self.assertFalse(extensions_goal_state.on_hold, "Extensions On Hold is expected to be False")

    def test_download_zip_package_should_expand_and_delete_the_package(self):
        extension_url = 'https://fake_host/fake_extension.zip'
        target_file = os.path.join(self.tmp_dir, 'fake_extension.zip')
        target_directory = os.path.join(self.tmp_dir, "fake_extension")

        def http_get_handler(url, *_, **__):
            if url == extension_url or self.is_host_plugin_extension_artifact_request(url):
                return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip"))
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol:
            protocol.client.download_zip_package("Microsoft.FakeExtension-1.0.0.0", [extension_url], target_file, target_directory, use_verify_header=False, signature="", enforce_signature=False)

            self.assertTrue(os.path.exists(target_directory), "The extension package was not downloaded")
            self.assertFalse(os.path.exists(target_file), "The extension package was not deleted")

    def test_download_zip_package_should_not_invoke_host_channel_when_direct_channel_succeeds(self):
        extension_url = 'https://fake_host/fake_extension.zip'
        target_file = os.path.join(self.tmp_dir, 'fake_extension.zip')
        target_directory = os.path.join(self.tmp_dir, "fake_extension")

        def http_get_handler(url, *_, **__):
            if url == extension_url:
                return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip"))
            if self.is_host_plugin_extension_artifact_request(url):
                self.fail('The host channel should not have been used')
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol:
            HostPluginProtocol.is_default_channel = False

            protocol.client.download_zip_package("Microsoft.FakeExtension-1.0.0.0", [extension_url], target_file, target_directory, use_verify_header=False, signature="", enforce_signature=False)

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 1, "Unexpected number of HTTP requests: [{0}]".format(urls))
            self.assertEqual(urls[0], extension_url, "The extension should have been downloaded over the direct channel")
            self.assertTrue(os.path.exists(target_directory), "The extension package was not downloaded")
            self.assertFalse(HostPluginProtocol.is_default_channel, "The host channel should not have been set as the default")

    def test_download_zip_package_should_use_host_channel_when_direct_channel_fails_and_set_host_as_default(self):
        extension_url = 'https://fake_host/fake_extension.zip'
        target_file = os.path.join(self.tmp_dir, 'fake_extension.zip')
        target_directory = os.path.join(self.tmp_dir, "fake_extension")

        def http_get_handler(url, *_, **kwargs):
            if url == extension_url:
                return HttpError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_extension_request(url, kwargs, extension_url):
                return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip"))
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol:
            HostPluginProtocol.is_default_channel = False

            protocol.client.download_zip_package("Microsoft.FakeExtension-1.0.0.0", [extension_url], target_file, target_directory, use_verify_header=False, signature="", enforce_signature=False)

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 2, "Unexpected number of HTTP requests: [{0}]".format(urls))
            self.assertEqual(urls[0], extension_url, "The first attempt should have been over the direct channel")
            self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The retry attempt should have been over the host channel")
            self.assertTrue(os.path.exists(target_directory), 'The extension package was not downloaded')
            self.assertTrue(HostPluginProtocol.is_default_channel, "The host channel should have been set as the default")

    def test_download_zip_package_should_retry_the_host_channel_after_refreshing_host_plugin(self):
        extension_url = 'https://fake_host/fake_extension.zip'
        target_file = os.path.join(self.tmp_dir, 'fake_extension.zip')
        target_directory = os.path.join(self.tmp_dir, "fake_extension")

        def http_get_handler(url, *_, **kwargs):
            if url == extension_url:
                return HttpError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_extension_request(url, kwargs, extension_url):
                # fake a stale goal state then succeed once the goal state has been refreshed
                if http_get_handler.goal_state_requests == 0:
                    http_get_handler.goal_state_requests += 1
                    return ResourceGoneError("Exception to fake a stale goal")
                return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip"))
            if self.is_goal_state_request(url):
                protocol.track_url(url)  # track requests for the goal state
            return None
        http_get_handler.goal_state_requests = 0

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            HostPluginProtocol.is_default_channel = False

            try:
                # initialization of the host plugin triggers a request for the goal state; do it here before we
                # start tracking those requests.
                protocol.client.get_host_plugin()
                protocol.set_http_handlers(http_get_handler=http_get_handler)

                protocol.client.download_zip_package("Microsoft.FakeExtension-1.0.0.0", [extension_url], target_file, target_directory, use_verify_header=False, signature="", enforce_signature=False)

                urls = protocol.get_tracked_urls()
                self.assertEqual(len(urls), 4, "Unexpected number of HTTP requests: [{0}]".format(urls))
                self.assertEqual(urls[0], extension_url, "The first attempt should have been over the direct channel")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel")
                self.assertTrue(self.is_goal_state_request(urls[2]), "The host channel should have refreshed the goal state")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The third attempt should have been over the host channel")
                self.assertTrue(os.path.exists(target_directory), 'The extension package was not downloaded')
                self.assertTrue(HostPluginProtocol.is_default_channel, "The host channel should have been set as the default")
            finally:
                HostPluginProtocol.is_default_channel = False

    def test_download_zip_package_should_not_change_default_channel_when_all_channels_fail(self):
        extension_url = 'https://fake_host/fake_extension.zip'
        target_file = os.path.join(self.tmp_dir, "fake_extension.zip")
        target_directory = os.path.join(self.tmp_dir, "fake_extension")

        def http_get_handler(url, *_, **kwargs):
            if url == extension_url or self.is_host_plugin_extension_request(url, kwargs, extension_url):
                return MockHttpResponse(status=404, body=b"content not found", reason="Not Found")
            if self.is_goal_state_request(url):
                protocol.track_url(url)  # keep track of goal state requests
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            HostPluginProtocol.is_default_channel = False

            # initialization of the host plugin triggers a request for the goal state; do it here before we start
            # tracking those requests.
            protocol.client.get_host_plugin()
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            with self.assertRaises(ExtensionDownloadError):
                protocol.client.download_zip_package("Microsoft.FakeExtension-1.0.0.0", [extension_url], target_file, target_directory, use_verify_header=False, signature="", enforce_signature=False)

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 2, "Unexpected number of HTTP requests: [{0}]".format(urls))
            self.assertEqual(urls[0], extension_url, "The first attempt should have been over the direct channel")
            self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel")
            self.assertFalse(os.path.exists(target_file), "The extension package was downloaded and it shouldn't have")
            self.assertFalse(HostPluginProtocol.is_default_channel, "The host channel should not have been set as the default")

    def test_invalid_zip_should_raise_an_error(self):
        extension_url = 'https://fake_host/fake_extension.zip'
        target_file = os.path.join(self.tmp_dir, "fake_extension.zip")
        target_directory = os.path.join(self.tmp_dir, "fake_extension")

        def http_get_handler(url, *_, **kwargs):
            if url == extension_url or self.is_host_plugin_extension_request(url, kwargs, extension_url):
                return MockHttpResponse(status=200, body=b"NOT A ZIP")
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            with self.assertRaises(ExtensionDownloadError):
                protocol.client.download_zip_package("Microsoft.FakeExtension-1.0.0.0", [extension_url], target_file, target_directory, use_verify_header=False, signature="", enforce_signature=False)

            self.assertFalse(os.path.exists(target_file), "The extension package should have been deleted")
            self.assertFalse(os.path.exists(target_directory), "The extension directory should not have been created")

    def test_fetch_manifest_should_not_invoke_host_channel_when_direct_channel_succeeds(self):
        manifest_url = 'https://fake_host/fake_manifest.xml'
        manifest_xml = ''

        def http_get_handler(url, *_, **__):
            if url == manifest_url:
                return MockHttpResponse(200, manifest_xml.encode('utf-8'))
            if url.endswith('/extensionArtifact'):
                self.fail('The Host GA Plugin should not have been invoked')
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol:
            HostPluginProtocol.is_default_channel = False

            manifest = protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False)

            urls = protocol.get_tracked_urls()
            self.assertEqual(manifest, manifest_xml, 'The expected manifest was not downloaded')
            self.assertEqual(len(urls), 1, "Unexpected number of HTTP requests: [{0}]".format(urls))
            self.assertEqual(urls[0], manifest_url, "The manifest should have been downloaded over the direct channel")
            self.assertFalse(HostPluginProtocol.is_default_channel, "The default channel should not have changed")

    def test_fetch_manifest_should_use_host_channel_when_direct_channel_fails_and_set_it_to_default(self):
        manifest_url = 'https://fake_host/fake_manifest.xml'
        manifest_xml = ''

        def http_get_handler(url, *_, **kwargs):
            if url == manifest_url:
                return ResourceGoneError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_extension_request(url, kwargs, manifest_url):
                return MockHttpResponse(200, body=manifest_xml.encode('utf-8'))
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol:
            HostPluginProtocol.is_default_channel = False

            try:
                manifest = protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False)

                urls = protocol.get_tracked_urls()
                self.assertEqual(manifest, manifest_xml, 'The expected manifest was not downloaded')
                self.assertEqual(len(urls), 2, "Unexpected number of HTTP requests: [{0}]".format(urls))
                self.assertEqual(urls[0], manifest_url, "The first attempt should have been over the direct channel")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The retry should have been over the host channel")
                self.assertTrue(HostPluginProtocol.is_default_channel, "The host should have been set as the default channel")
            finally:
                HostPluginProtocol.is_default_channel = False  # Reset default channel

    def test_fetch_manifest_should_retry_the_host_channel_after_refreshing_the_host_plugin_and_set_the_host_as_default(self):
        manifest_url = 'https://fake_host/fake_manifest.xml'
        manifest_xml = ''

        def http_get_handler(url, *_, **kwargs):
            if url == manifest_url:
                return HttpError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_extension_request(url, kwargs, manifest_url):
                # fake a stale goal state then succeed once the goal state has been refreshed
                if http_get_handler.goal_state_requests == 0:
                    http_get_handler.goal_state_requests += 1
                    return ResourceGoneError("Exception to fake a stale goal state")
                return MockHttpResponse(200, manifest_xml.encode('utf-8'))
            elif self.is_goal_state_request(url):
                protocol.track_url(url)  # keep track of goal state requests
            return None
        http_get_handler.goal_state_requests = 0

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            HostPluginProtocol.is_default_channel = False

            try:
                # initialization of the host plugin triggers a request for the goal state; do it here before we start
                # tracking those requests.
                protocol.client.get_host_plugin()
                protocol.set_http_handlers(http_get_handler=http_get_handler)

                manifest = protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False)

                urls = protocol.get_tracked_urls()
                self.assertEqual(manifest, manifest_xml)
                self.assertEqual(len(urls), 4, "Unexpected number of HTTP requests: [{0}]".format(urls))
                self.assertEqual(urls[0], manifest_url, "The first attempt should have been over the direct channel")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel")
                self.assertTrue(self.is_goal_state_request(urls[2]), "The host channel should have refreshed the goal state")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The third attempt should have been over the host channel")
                self.assertTrue(HostPluginProtocol.is_default_channel, "The host should have been set as the default channel")
            finally:
                HostPluginProtocol.is_default_channel = False  # Reset default channel

    def test_fetch_manifest_should_update_goal_state_and_not_change_default_channel_if_host_fails(self):
        manifest_url = 'https://fake_host/fake_manifest.xml'

        def http_get_handler(url, *_, **kwargs):
            if url == manifest_url or self.is_host_plugin_extension_request(url, kwargs, manifest_url):
                return ResourceGoneError("Exception to fake an error on either channel")
            elif self.is_goal_state_request(url):
                protocol.track_url(url)  # keep track of goal state requests
            return None

        # Everything fails. Goal state should have been updated and host channel should not have been set as default.
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            HostPluginProtocol.is_default_channel = False

            # initialization of the host plugin triggers a request for the goal state; do it here before we start
            # tracking those requests.
            protocol.client.get_host_plugin()
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            with self.assertRaises(ExtensionDownloadError):
                protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False)

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 4, "Unexpected number of HTTP requests: [{0}]".format(urls))
            self.assertEqual(urls[0], manifest_url, "The first attempt should have been over the direct channel")
            self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel")
            self.assertTrue(self.is_goal_state_request(urls[2]), "The host channel should have refreshed the goal state")
            self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The third attempt should have been over the host channel")
            self.assertFalse(HostPluginProtocol.is_default_channel, "The host should not have been set as the default channel")
            self.assertEqual(HostPluginProtocol.is_default_channel, False)

    def test_get_artifacts_profile_should_not_invoke_host_channel_when_direct_channel_succeeds(self):
        def http_get_handler(url, *_, **__):
            if self.is_in_vm_artifacts_profile_request(url):
                protocol.track_url(url)
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol:
            protocol.set_http_handlers(http_get_handler=http_get_handler)
            HostPluginProtocol.is_default_channel = False

            protocol.client.reset_goal_state()

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 1, "Unexpected HTTP requests: [{0}]".format(urls))
            self.assertFalse(HostPluginProtocol.is_default_channel, "The host should not have been set as the default channel")

    def test_get_artifacts_profile_should_use_host_channel_when_direct_channel_fails(self):
        def http_get_handler(url, *_, **kwargs):
            if self.is_in_vm_artifacts_profile_request(url):
                return HttpError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs):
                protocol.track_url(url)
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol:
            protocol.set_http_handlers(http_get_handler=http_get_handler)
            HostPluginProtocol.is_default_channel = False

            try:
                protocol.client.reset_goal_state()

                urls = protocol.get_tracked_urls()
                self.assertEqual(len(urls), 2, "Invalid number of requests: [{0}]".format(urls))
                self.assertTrue(self.is_in_vm_artifacts_profile_request(urls[0]), "The first request should have been over the direct channel")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second request should have been over the host channel")
                self.assertTrue(HostPluginProtocol.is_default_channel, "The default channel should have changed to the host")
            finally:
                HostPluginProtocol.is_default_channel = False

    def test_get_artifacts_profile_should_retry_the_host_channel_after_refreshing_the_host_plugin(self):
        def http_get_handler(url, *_, **kwargs):
            if self.is_in_vm_artifacts_profile_request(url):
                return HttpError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs):
                if http_get_handler.host_plugin_calls == 0:
                    http_get_handler.host_plugin_calls += 1
                    return ResourceGoneError("Exception to fake a stale goal state")
                protocol.track_url(url)
            if self.is_goal_state_request(url) and http_get_handler.host_plugin_calls == 1:
                protocol.track_url(url)
            return None
        http_get_handler.host_plugin_calls = 0

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol:
            HostPluginProtocol.is_default_channel = False

            try:
                # initialization of the host plugin triggers a request for the goal state; do it here before we start
                # tracking those requests.
                protocol.client.get_host_plugin()
                protocol.set_http_handlers(http_get_handler=http_get_handler)

                protocol.client.reset_goal_state()

                urls = protocol.get_tracked_urls()
                self.assertEqual(len(urls), 4, "Invalid number of requests: [{0}]".format(urls))
                self.assertTrue(self.is_in_vm_artifacts_profile_request(urls[0]), "The first request should have been over the direct channel")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second request should have been over the host channel")
                self.assertTrue(self.is_goal_state_request(urls[2]), "The goal state should have been refreshed before retrying the host channel")
                self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The retry request should have been over the host channel")
                self.assertTrue(HostPluginProtocol.is_default_channel, "The default channel should have changed to the host")
            finally:
                HostPluginProtocol.is_default_channel = False

    def test_get_artifacts_profile_should_refresh_the_host_plugin_and_not_change_default_channel_if_host_plugin_fails(self):
        def http_get_handler(url, *_, **kwargs):
            if self.is_in_vm_artifacts_profile_request(url):
                return HttpError("Exception to fake an error on the direct channel")
            if self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs):
                http_get_handler.host_plugin_calls += 1
                return ResourceGoneError("Exception to fake a stale goal state")
            if self.is_goal_state_request(url) and http_get_handler.host_plugin_calls == 1:
                protocol.track_url(url)
            return None
        http_get_handler.host_plugin_calls = 0

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol:
            HostPluginProtocol.is_default_channel = False
            # initialization of the host plugin triggers a request for the goal state; do it here before we start tracking those requests.
            protocol.client.get_host_plugin()
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            protocol.client.reset_goal_state()

            urls = protocol.get_tracked_urls()
            self.assertEqual(len(urls), 4, "Invalid number of requests: [{0}]".format(urls))
            self.assertTrue(self.is_in_vm_artifacts_profile_request(urls[0]), "The first request should have been over the direct channel")
            self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second request should have been over the host channel")
            self.assertTrue(self.is_goal_state_request(urls[2]), "The goal state should have been refreshed before retrying the host channel")
            self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The retry request should have been over the host channel")
            self.assertFalse(HostPluginProtocol.is_default_channel, "The default channel should not have changed")

    @staticmethod
    def _set_and_fail_helper_channel_functions(fail_direct=False, fail_host=False):
        def direct_func(*_):
            direct_func.counter += 1
            if direct_func.fail:
                raise Exception("Direct channel failed")
            return "direct"

        def host_func(*_):
            host_func.counter += 1
            if host_func.fail:
                raise Exception("Host channel failed")
            return "host"

        direct_func.counter = 0
        direct_func.fail = fail_direct
        host_func.counter = 0
        host_func.fail = fail_host

        return direct_func, host_func

    def test_download_using_appropriate_channel_should_not_invoke_secondary_when_primary_channel_succeeds(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            # Scenario #1: Direct channel default
            HostPluginProtocol.is_default_channel = False
            direct_func, host_func = self._set_and_fail_helper_channel_functions()

            # Assert we're only calling the primary channel (direct) and that it succeeds.
            for iteration in range(5):
                ret = protocol.client._download_using_appropriate_channel(direct_func, host_func)
                self.assertEqual("direct", ret)
                self.assertEqual(iteration + 1, direct_func.counter)
                self.assertEqual(0, host_func.counter)
                self.assertFalse(HostPluginProtocol.is_default_channel)

            # Scenario #2: Host channel default
            HostPluginProtocol.is_default_channel = True
            direct_func, host_func = self._set_and_fail_helper_channel_functions()

            # Assert we're only calling the primary channel (host) and that it succeeds.
            for iteration in range(5):
                ret = protocol.client._download_using_appropriate_channel(direct_func, host_func)
                self.assertEqual("host", ret)
                self.assertEqual(0, direct_func.counter)
                self.assertEqual(iteration + 1, host_func.counter)
                self.assertTrue(HostPluginProtocol.is_default_channel)

    def test_download_using_appropriate_channel_should_not_change_default_channel_if_none_succeeds(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            # Scenario #1: Direct channel is default
            HostPluginProtocol.is_default_channel = False
            direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=True, fail_host=True)

            # Assert we keep trying both channels, but the default channel doesn't change
            for iteration in range(5):
                with self.assertRaises(HttpError):
                    protocol.client._download_using_appropriate_channel(direct_func, host_func)
                self.assertEqual(iteration + 1, direct_func.counter)
                self.assertEqual(iteration + 1, host_func.counter)
                self.assertFalse(HostPluginProtocol.is_default_channel)

            # Scenario #2: Host channel is default
            HostPluginProtocol.is_default_channel = True
            direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=True, fail_host=True)

            # Assert we keep trying both channels, but the default channel doesn't change
            for iteration in range(5):
                with self.assertRaises(HttpError):
                    protocol.client._download_using_appropriate_channel(direct_func, host_func)
                self.assertEqual(iteration + 1, direct_func.counter)
                self.assertEqual(iteration + 1, host_func.counter)
                self.assertTrue(HostPluginProtocol.is_default_channel)

    def test_download_using_appropriate_channel_should_change_default_channel_when_secondary_succeeds(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            # Scenario #1: Direct channel is default
            HostPluginProtocol.is_default_channel = False
            direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=True, fail_host=False)

            # Assert we've called both channels and the default channel changed
            ret = protocol.client._download_using_appropriate_channel(direct_func, host_func)
            self.assertEqual("host", ret)
            self.assertEqual(1, direct_func.counter)
            self.assertEqual(1, host_func.counter)
            self.assertTrue(HostPluginProtocol.is_default_channel)

            # If host keeps succeeding, assert we keep calling only that channel and not changing the default.
            for iteration in range(5):
                ret = protocol.client._download_using_appropriate_channel(direct_func, host_func)
                self.assertEqual("host", ret)
                self.assertEqual(1, direct_func.counter)
                self.assertEqual(1 + iteration + 1, host_func.counter)
                self.assertTrue(HostPluginProtocol.is_default_channel)

            # Scenario #2: Host channel is default
            HostPluginProtocol.is_default_channel = True
            direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=False, fail_host=True)

            # Assert we've called both channels and the default channel changed
            ret = protocol.client._download_using_appropriate_channel(direct_func, host_func)
            self.assertEqual("direct", ret)
            self.assertEqual(1, direct_func.counter)
            self.assertEqual(1, host_func.counter)
            self.assertFalse(HostPluginProtocol.is_default_channel)

            # If direct keeps succeeding, assert we keep calling only that channel and not changing the default.
            for iteration in range(5):
                ret = protocol.client._download_using_appropriate_channel(direct_func, host_func)
                self.assertEqual("direct", ret)
                self.assertEqual(1 + iteration + 1, direct_func.counter)
                self.assertEqual(1, host_func.counter)
                self.assertFalse(HostPluginProtocol.is_default_channel)


class UpdateGoalStateTestCase(HttpRequestPredicates, AgentTestCase):
    """
    Tests for WireClient.update_goal_state() and WireClient.reset_goal_state()
    """
    def test_it_should_update_the_goal_state_and_the_host_plugin_when_the_incarnation_changes(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.client.get_host_plugin()

            # if the incarnation changes the behavior is the same for forced and non-forced updates
            for forced in [True, False]:
                protocol.mock_wire_data.reload()  # start each iteration of the test with fresh mock data

                #
                # Update the mock data with random values; include at least one field from each of the components
                # in the goal state to ensure the entire state was updated. Note that numeric entities, e.g. incarnation,
                # are actually represented as strings in the goal state.
                #
                # Note that the shared config is not parsed by the agent, so we modify the XML data directly. Also, the
                # certificates are encrypted and it is hard to update a single field; instead, we replace the entire list
                # with an empty one.
                #
                new_incarnation = str(uuid.uuid4())
                new_container_id = str(uuid.uuid4())
                new_role_config_name = str(uuid.uuid4())
                new_hosting_env_deployment_name = str(uuid.uuid4())
                new_shared_conf = WireProtocolData.replace_xml_attribute_value(protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4()))
                new_sequence_number = 12345

                if 'Pkcs7BlobWithPfxContents' not in protocol.mock_wire_data.certs:
                    raise Exception('This test requires a non-empty certificate list')

                protocol.mock_wire_data.set_incarnation(new_incarnation)
                protocol.mock_wire_data.set_container_id(new_container_id)
                protocol.mock_wire_data.set_role_config_name(new_role_config_name)
                protocol.mock_wire_data.set_hosting_env_deployment_name(new_hosting_env_deployment_name)
                protocol.mock_wire_data.shared_config = new_shared_conf
                protocol.mock_wire_data.set_extensions_config_sequence_number(new_sequence_number)
                protocol.mock_wire_data.certs = r'''<?xml version="1.0" encoding="utf-8"?>
                    <CertificateFile>
                      <Version>2012-11-30</Version>
                      <Incarnation>12</Incarnation>
                      <Format>CertificatesNonPfxPackage</Format>
                      <Data>NotPFXData</Data>
                    </CertificateFile>
                '''

                if forced:
                    protocol.client.reset_goal_state()
                else:
                    protocol.client.update_goal_state()

                sequence_number = protocol.get_goal_state().extensions_goal_state.extensions[0].settings[0].sequenceNumber

                self.assertEqual(protocol.client.get_goal_state().incarnation, new_incarnation)
                self.assertEqual(protocol.client.get_hosting_env().deployment_name, new_hosting_env_deployment_name)
                self.assertEqual(protocol.client.get_shared_conf().xml_text, new_shared_conf)
                self.assertEqual(sequence_number, new_sequence_number)
                self.assertEqual(len(protocol.client.get_certs().summary), 0)
                self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id)
                self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name)

    def test_non_forced_update_should_not_update_the_goal_state_but_should_update_the_host_plugin_when_the_incarnation_does_not_change(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.client.get_host_plugin()

            # The container id, role config name and shared config can change without the incarnation changing;
            # capture the initial goal state and then change those fields.
            container_id = protocol.client.get_goal_state().container_id
            role_config_name = protocol.client.get_goal_state().role_config_name

            new_container_id = str(uuid.uuid4())
            new_role_config_name = str(uuid.uuid4())
            protocol.mock_wire_data.set_container_id(new_container_id)
            protocol.mock_wire_data.set_role_config_name(new_role_config_name)
            protocol.mock_wire_data.shared_config = WireProtocolData.replace_xml_attribute_value(
                protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4()))

            protocol.client.update_goal_state()

            self.assertEqual(protocol.client.get_goal_state().container_id, container_id)
            self.assertEqual(protocol.client.get_goal_state().role_config_name, role_config_name)
            self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id)
            self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name)

    def test_forced_update_should_update_the_goal_state_and_the_host_plugin_when_the_incarnation_does_not_change(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.client.get_host_plugin()

            # The container id, role config name and shared config can change without the incarnation changing
            incarnation = protocol.client.get_goal_state().incarnation
            new_container_id = str(uuid.uuid4())
            new_role_config_name = str(uuid.uuid4())
            new_shared_conf = WireProtocolData.replace_xml_attribute_value(
                protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4()))

            protocol.mock_wire_data.set_container_id(new_container_id)
            protocol.mock_wire_data.set_role_config_name(new_role_config_name)
            protocol.mock_wire_data.shared_config = new_shared_conf

            protocol.client.reset_goal_state()

            self.assertEqual(protocol.client.get_goal_state().incarnation, incarnation)
            self.assertEqual(protocol.client.get_shared_conf().xml_text, new_shared_conf)
            self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id)
            self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name)

    def test_reset_should_init_provided_goal_state_properties(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.client.reset_goal_state(goal_state_properties=GoalStateProperties.All & ~GoalStateProperties.Certificates)

            with self.assertRaises(ProtocolError) as context:
                _ = protocol.client.get_certs()

            expected_message = "Certificates is not in goal state properties"
            self.assertIn(expected_message, str(context.exception))

    def test_reset_should_init_the_goal_state(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            new_container_id = str(uuid.uuid4())
            new_role_config_name = str(uuid.uuid4())
            protocol.mock_wire_data.set_container_id(new_container_id)
            protocol.mock_wire_data.set_role_config_name(new_role_config_name)

            protocol.client.reset_goal_state()

            self.assertEqual(protocol.client.get_goal_state().container_id, new_container_id)
            self.assertEqual(protocol.client.get_goal_state().role_config_name, new_role_config_name)


class UpdateHostPluginFromGoalStateTestCase(AgentTestCase):
    """
    Tests for WireClient.update_host_plugin_from_goal_state()
    """
    def test_it_should_update_the_host_plugin_with_or_without_incarnation_changes(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.client.get_host_plugin()

            # the behavior should be the same whether the incarnation changes or not
            for incarnation_change in [True, False]:
                protocol.mock_wire_data.reload()  # start each iteration of the test with fresh mock data

                new_container_id = str(uuid.uuid4())
                new_role_config_name = str(uuid.uuid4())

                if incarnation_change:
                    protocol.mock_wire_data.set_incarnation(str(uuid.uuid4()))

                protocol.mock_wire_data.set_container_id(new_container_id)
                protocol.mock_wire_data.set_role_config_name(new_role_config_name)
                protocol.mock_wire_data.shared_config = WireProtocolData.replace_xml_attribute_value(
                    protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4()))

                protocol.client.update_host_plugin_from_goal_state()

                self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id)
                self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name)


if __name__ == '__main__':
    unittest.main()


# File: Azure-WALinuxAgent-a976115/tests/common/test_agent_supported_feature.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

from azurelinuxagent.common.agent_supported_feature import SupportedFeatureNames, \
    get_agent_supported_features_list_for_crp, get_supported_feature_by_name, \
    get_agent_supported_features_list_for_extensions
from tests.lib.tools import AgentTestCase, patch


class TestAgentSupportedFeature(AgentTestCase):

    def test_it_should_return_features_properly(self):
        with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", True):
            self.assertIn(SupportedFeatureNames.MultiConfig, get_agent_supported_features_list_for_crp(),
                          "Multi-config should be fetched in crp_supported_features")

        with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", False):
            self.assertNotIn(SupportedFeatureNames.MultiConfig, get_agent_supported_features_list_for_crp(),
                             "Multi-config should not be fetched in crp_supported_features as not supported")

        self.assertEqual(SupportedFeatureNames.MultiConfig,
                         get_supported_feature_by_name(SupportedFeatureNames.MultiConfig).name,
                         "Invalid/Wrong feature returned")

        # Raise error if feature name not found
        with self.assertRaises(NotImplementedError):
            get_supported_feature_by_name("ABC")

    def test_it_should_return_extension_supported_features_properly(self):
        with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
            self.assertIn(SupportedFeatureNames.ExtensionTelemetryPipeline,
                          get_agent_supported_features_list_for_extensions(),
                          "ETP should be in supported features list")

        with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", False):
            self.assertNotIn(SupportedFeatureNames.ExtensionTelemetryPipeline,
                             get_agent_supported_features_list_for_extensions(),
                             "ETP should not be in supported features list")

        self.assertEqual(SupportedFeatureNames.ExtensionTelemetryPipeline,
                         get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).name,
                         "Invalid/Wrong feature returned")

    def test_it_should_return_ga_versioning_governance_feature_properly(self):
        with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", True):
            self.assertIn(SupportedFeatureNames.GAVersioningGovernance, get_agent_supported_features_list_for_crp(),
                          "GAVersioningGovernance should be fetched in crp_supported_features")

        with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", False):
            self.assertNotIn(SupportedFeatureNames.GAVersioningGovernance, get_agent_supported_features_list_for_crp(),
                             "GAVersioningGovernance should not be fetched in crp_supported_features as not supported")

        self.assertEqual(SupportedFeatureNames.GAVersioningGovernance,
                         get_supported_feature_by_name(SupportedFeatureNames.GAVersioningGovernance).name,
                         "Invalid/Wrong feature returned")

        # Raise error if feature name not found
        with self.assertRaises(NotImplementedError):
            get_supported_feature_by_name("ABC")


# File: Azure-WALinuxAgent-a976115/tests/common/test_conf.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os.path

import azurelinuxagent.common.conf as conf
from azurelinuxagent.common.utils import fileutil
from tests.lib.tools import AgentTestCase, data_dir


class TestConf(AgentTestCase):
    # Note:
    # -- These values *MUST* match those from data/test_waagent.conf
    EXPECTED_CONFIGURATION = {
        "Extensions.Enabled": True,
        "Extensions.WaitForCloudInit": False,
        "Extensions.WaitForCloudInitTimeout": 3600,
        "Provisioning.Agent": "auto",
        "Provisioning.DeleteRootPassword": True,
        "Provisioning.RegenerateSshHostKeyPair": True,
        "Provisioning.SshHostKeyPairType": "rsa",
        "Provisioning.MonitorHostName": True,
        "Provisioning.DecodeCustomData": False,
        "Provisioning.ExecuteCustomData": False,
        "Provisioning.PasswordCryptId": '6',
        "Provisioning.PasswordCryptSaltLength": 10,
        "Provisioning.AllowResetSysUser": False,
        "ResourceDisk.Format": True,
        "ResourceDisk.Filesystem": "ext4",
        "ResourceDisk.MountPoint": "/mnt/resource",
        "ResourceDisk.EnableSwap": False,
        "ResourceDisk.EnableSwapEncryption": False,
        "ResourceDisk.SwapSizeMB": 0,
        "ResourceDisk.MountOptions": None,
        "Logs.Verbose": False,
        "OS.EnableFIPS": True,
        "OS.RootDeviceScsiTimeout": '300',
        "OS.OpensslPath": '/usr/bin/openssl',
        "OS.SshClientAliveInterval": 42,
        "OS.SshDir": "/notareal/path",
        "HttpProxy.Host": None,
        "HttpProxy.Port": None,
        "DetectScvmmEnv": False,
        "Lib.Dir": "/var/lib/waagent",
        "DVD.MountPoint": "/mnt/cdrom/secure",
        "Pid.File": "/var/run/waagent.pid",
        "Extension.LogDir": "/var/log/azure",
        "OS.HomeDir": "/home",
        "OS.EnableRDMA": False,
        "OS.UpdateRdmaDriver": False,
        "OS.CheckRdmaDriver": False,
        "AutoUpdate.Enabled": True,
        "AutoUpdate.GAFamily": "Prod",
        "AutoUpdate.UpdateToLatestVersion": True,
        "EnableOverProvisioning": True,
        "OS.AllowHTTP": False,
        "OS.EnableFirewall": False
    }

    def setUp(self):
        AgentTestCase.setUp(self)
        self.conf = conf.ConfigurationProvider()
        conf.load_conf_from_file(
            os.path.join(data_dir, "test_waagent.conf"),
            self.conf)

    def test_get_should_return_default_when_key_is_not_found(self):
        self.assertEqual("The Default Value", self.conf.get("this-key-does-not-exist", "The Default Value"))
        self.assertEqual("The Default Value", self.conf.get("this-key-does-not-exist", lambda: "The Default Value"))

    def test_get_switch_should_return_default_when_key_is_not_found(self):
        self.assertEqual(True, self.conf.get_switch("this-key-does-not-exist", True))
        self.assertEqual(True, self.conf.get_switch("this-key-does-not-exist", lambda: True))

    def test_get_int_should_return_default_when_key_is_not_found(self):
        self.assertEqual(123456789, self.conf.get_int("this-key-does-not-exist", 123456789))
        self.assertEqual(123456789, self.conf.get_int("this-key-does-not-exist", lambda: 123456789))

    def test_key_value_handling(self):
        self.assertEqual("Value1", self.conf.get("FauxKey1", "Bad"))
        self.assertEqual("Value2 Value2", self.conf.get("FauxKey2", "Bad"))
        self.assertEqual("delalloc,rw,noatime,nobarrier,users,mode=777", self.conf.get("FauxKey3", "Bad"))

    def test_get_ssh_dir(self):
        self.assertTrue(conf.get_ssh_dir(self.conf).startswith("/notareal/path"))

    def test_get_sshd_conf_file_path(self):
        self.assertTrue(conf.get_sshd_conf_file_path(self.conf).startswith("/notareal/path"))

    def test_get_ssh_key_glob(self):
        self.assertTrue(conf.get_ssh_key_glob(self.conf).startswith("/notareal/path"))

    def test_get_ssh_key_private_path(self):
        self.assertTrue(conf.get_ssh_key_private_path(self.conf).startswith("/notareal/path"))

    def test_get_ssh_key_public_path(self):
        self.assertTrue(conf.get_ssh_key_public_path(self.conf).startswith("/notareal/path"))

    def test_get_fips_enabled(self):
        self.assertTrue(conf.get_fips_enabled(self.conf))

    def test_get_provision_agent(self):
        self.assertTrue(conf.get_provisioning_agent(self.conf) == 'auto')

    def test_get_configuration(self):
        configuration = conf.get_configuration(self.conf)
        self.assertTrue(len(configuration.keys()) > 0)
        for k in TestConf.EXPECTED_CONFIGURATION.keys():
            self.assertEqual(
                TestConf.EXPECTED_CONFIGURATION[k],
                configuration[k],
                k)

    def test_get_agent_disabled_file_path(self):
        self.assertEqual(conf.get_disable_agent_file_path(self.conf),
                         os.path.join(self.tmp_dir, conf.DISABLE_AGENT_FILE))

    def test_write_agent_disabled(self):
        """
        Test writing disable_agent is empty
        """
        from azurelinuxagent.pa.provision.default import ProvisionHandler

        disable_file_path = conf.get_disable_agent_file_path(self.conf)
        self.assertFalse(os.path.exists(disable_file_path))
        ProvisionHandler.write_agent_disabled()
        self.assertTrue(os.path.exists(disable_file_path))
        self.assertEqual('', fileutil.read_file(disable_file_path))

    def test_get_extensions_enabled(self):
        self.assertTrue(conf.get_extensions_enabled(self.conf))

    def test_get_auto_update_to_latest_version(self):
        # update flags not set
        self.assertTrue(conf.get_auto_update_to_latest_version(self.conf))

        config = conf.ConfigurationProvider()
        # AutoUpdate.Enabled is set to 'n'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_auto_update_disabled.conf"), config)
        self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'")

        # AutoUpdate.Enabled is set to 'y'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_auto_update_enabled.conf"), config)
        self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'")

        # AutoUpdate.UpdateToLatestVersion is set to 'n'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_update_to_latest_version_disabled.conf"), config)
        self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'")

        # AutoUpdate.UpdateToLatestVersion is set to 'y'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_update_to_latest_version_enabled.conf"), config)
        self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'")

        # AutoUpdate.Enabled is set to 'y' and AutoUpdate.UpdateToLatestVersion is set to 'n'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_auto_update_enabled_update_to_latest_version_disabled.conf"), config)
        self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'")

        # AutoUpdate.Enabled is set to 'n' and AutoUpdate.UpdateToLatestVersion is set to 'y'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_auto_update_disabled_update_to_latest_version_enabled.conf"), config)
        self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'")

        # AutoUpdate.Enabled is set to 'n' and AutoUpdate.UpdateToLatestVersion is set to 'n'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_auto_update_disabled_update_to_latest_version_disabled.conf"), config)
        self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'")

        # AutoUpdate.Enabled is set to 'y' and AutoUpdate.UpdateToLatestVersion is set to 'y'
        conf.load_conf_from_file(
            os.path.join(data_dir, "config/waagent_auto_update_enabled_update_to_latest_version_enabled.conf"), config)
        self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'")


# File: Azure-WALinuxAgent-a976115/tests/common/test_errorstate.py

import unittest
from datetime import timedelta, datetime

from azurelinuxagent.common.errorstate import ErrorState
from azurelinuxagent.common.future import UTC
from tests.lib.tools import Mock, patch


class TestErrorState(unittest.TestCase):
    def test_errorstate00(self):
        """
        If ErrorState is never incremented, it will never trigger.
        """
        test_subject = ErrorState(timedelta(seconds=10000))
        self.assertFalse(test_subject.is_triggered())
        self.assertEqual(0, test_subject.count)
        self.assertEqual('unknown', test_subject.fail_time)

    def test_errorstate01(self):
        """
        If ErrorState is never incremented, and the timedelta is zero, it will not trigger.
        """
        test_subject = ErrorState(timedelta(seconds=0))
        self.assertFalse(test_subject.is_triggered())
        self.assertEqual(0, test_subject.count)
        self.assertEqual('unknown', test_subject.fail_time)

    def test_errorstate02(self):
        """
        If ErrorState is incremented and the timedelta is zero, it will trigger immediately.
        """
        test_subject = ErrorState(timedelta(seconds=0))
        test_subject.incr()

        self.assertTrue(test_subject.is_triggered())
        self.assertEqual(1, test_subject.count)
        self.assertEqual('0.0 min', test_subject.fail_time)

    @patch('azurelinuxagent.common.errorstate.datetime')
    def test_errorstate03(self, mock_time):
        """
        ErrorState will not trigger until
        1. ErrorState has been incr() at least once.
        2. The timedelta from the first incr() has elapsed.
        """
        test_subject = ErrorState(timedelta(minutes=15))

        for x in range(1, 10):
            mock_time.now = Mock(return_value=datetime.now(UTC) + timedelta(minutes=x))
            test_subject.incr()
            self.assertFalse(test_subject.is_triggered())

        mock_time.now = Mock(return_value=datetime.now(UTC) + timedelta(minutes=30))
        test_subject.incr()
        self.assertTrue(test_subject.is_triggered())
        self.assertEqual('29.0 min', test_subject.fail_time)

    def test_errorstate04(self):
        """
        If ErrorState is reset, the timestamp of the last incr() is reset to None.
        """
        test_subject = ErrorState(timedelta(minutes=15))
        self.assertTrue(test_subject.timestamp is None)

        test_subject.incr()
        self.assertTrue(test_subject.timestamp is not None)

        test_subject.reset()
        self.assertTrue(test_subject.timestamp is None)

    def test_errorstate05(self):
        """
        Test the fail_time for various scenarios
        """
        test_subject = ErrorState(timedelta(minutes=15))
        self.assertEqual('unknown', test_subject.fail_time)

        test_subject.incr()
        self.assertEqual('0.0 min', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=60)
        self.assertEqual('1.0 min', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=73)
        self.assertEqual('1.22 min', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=120)
        self.assertEqual('2.0 min', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=60 * 59)
        self.assertEqual('59.0 min', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=60 * 60)
        self.assertEqual('1.0 hr', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=60 * 95)
        self.assertEqual('1.58 hr', test_subject.fail_time)

        test_subject.timestamp = datetime.now(UTC) - timedelta(seconds=60 * 60 * 3)
        self.assertEqual('3.0 hr', test_subject.fail_time)


# File: Azure-WALinuxAgent-a976115/tests/common/test_event.py

# coding=utf-8
#
# Copyright 2017 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from __future__ import print_function import json import os import platform import re import shutil import threading import xml.dom from datetime import datetime, timedelta from mock import MagicMock from azurelinuxagent.common.utils import textutil, fileutil, timeutil from azurelinuxagent.common import event, logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.event import add_event, add_periodic, add_log_event, elapsed_milliseconds, \ WALAEventOperation, parse_xml_event, parse_json_event, AGENT_EVENT_FILE_EXTENSION, EVENTS_DIRECTORY, \ TELEMETRY_EVENT_EVENT_ID, TELEMETRY_EVENT_PROVIDER_ID, TELEMETRY_LOG_EVENT_ID, TELEMETRY_LOG_PROVIDER_ID, \ report_metric from azurelinuxagent.common.future import ustr, UTC from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.telemetryevent import CommonTelemetryEventSchema, GuestAgentGenericLogsSchema, \ GuestAgentExtensionEventsSchema, GuestAgentPerfCounterEventsSchema from azurelinuxagent.common.version import CURRENT_AGENT, CURRENT_VERSION, AGENT_EXECUTION_MODE from azurelinuxagent.ga.collect_telemetry_events import _CollectAndEnqueueEvents from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.tools import AgentTestCase, data_dir, load_data, patch, skip_if_predicate_true from tests.lib.event_logger_tools import EventLoggerTools class TestEvent(HttpRequestPredicates, AgentTestCase): # These are the Operation/Category for events produced by the tests below (as opposed by events produced by the agent itself) _Message = "ThisIsATestEventMessage" _Operation = "ThisIsATestEventOperation" _Category = "ThisIsATestMetricCategory" def setUp(self): AgentTestCase.setUp(self) 
        self.event_dir = os.path.join(self.tmp_dir, EVENTS_DIRECTORY)
        EventLoggerTools.initialize_event_logger(self.event_dir)
        threading.current_thread().name = "TestEventThread"
        osutil = get_osutil()

        self.expected_common_parameters = {
            # common parameters computed at event creation; the timestamp (stored as the opcode name) is not included
            # here and is checked separately from these parameters
            CommonTelemetryEventSchema.GAVersion: CURRENT_AGENT,
            CommonTelemetryEventSchema.ContainerId: AgentGlobals.get_container_id(),
            CommonTelemetryEventSchema.EventTid: threading.current_thread().ident,
            CommonTelemetryEventSchema.EventPid: os.getpid(),
            CommonTelemetryEventSchema.TaskName: threading.current_thread().name,
            CommonTelemetryEventSchema.KeywordName: json.dumps({"CpuArchitecture": platform.machine()}),
            # common parameters computed from the OS platform
            CommonTelemetryEventSchema.OSVersion: EventLoggerTools.get_expected_os_version(),
            CommonTelemetryEventSchema.ExecutionMode: AGENT_EXECUTION_MODE,
            CommonTelemetryEventSchema.RAM: int(osutil.get_total_mem()),
            CommonTelemetryEventSchema.Processors: osutil.get_processor_cores(),
            # common parameters from the goal state
            CommonTelemetryEventSchema.TenantName: 'db00a7755a5e4e8a8fe4b19bc3b330c3',
            CommonTelemetryEventSchema.RoleName: 'MachineRole',
            CommonTelemetryEventSchema.RoleInstanceName: 'b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0',
            # common parameters
            CommonTelemetryEventSchema.Location: EventLoggerTools.mock_imds_data['location'],
            CommonTelemetryEventSchema.SubscriptionId: EventLoggerTools.mock_imds_data['subscriptionId'],
            CommonTelemetryEventSchema.ResourceGroupName: EventLoggerTools.mock_imds_data['resourceGroupName'],
            CommonTelemetryEventSchema.VMId: EventLoggerTools.mock_imds_data['vmId'],
            CommonTelemetryEventSchema.ImageOrigin: EventLoggerTools.mock_imds_data['image_origin'],
        }

        self.expected_extension_events_params = {
            GuestAgentExtensionEventsSchema.IsInternal: False,
            GuestAgentExtensionEventsSchema.ExtensionType: ""
        }

    @staticmethod
    def _report_events(protocol, event_list):
        def _yield_events():
            for telemetry_event in event_list:
                yield telemetry_event
        protocol.client.report_event(_yield_events())

    @staticmethod
    def _collect_events():
        def append_event(e):
            for p in e.parameters:
                if p.name == 'Operation' and p.value == TestEvent._Operation \
                        or p.name == 'Category' and p.value == TestEvent._Category \
                        or p.name == 'Message' and p.value == TestEvent._Message \
                        or p.name == 'Context1' and p.value == TestEvent._Message:
                    event_list.append(e)

        event_list = []
        send_telemetry_events = MagicMock()
        send_telemetry_events.enqueue_event = MagicMock(wraps=append_event)
        event_collector = _CollectAndEnqueueEvents(send_telemetry_events)
        event_collector.process_events()
        return event_list

    def _collect_event_files(self):
        files = [os.path.join(self.event_dir, f) for f in os.listdir(self.event_dir)]
        return [f for f in files if fileutil.findre_in_file(f, TestEvent._Operation)]

    @staticmethod
    def _is_guest_extension_event(event):  # pylint: disable=redefined-outer-name
        return event.eventId == TELEMETRY_EVENT_EVENT_ID and event.providerId == TELEMETRY_EVENT_PROVIDER_ID

    @staticmethod
    def _is_telemetry_log_event(event):  # pylint: disable=redefined-outer-name
        return event.eventId == TELEMETRY_LOG_EVENT_ID and event.providerId == TELEMETRY_LOG_PROVIDER_ID

    def test_parse_xml_event(self, *args):  # pylint: disable=unused-argument
        data_str = load_data('ext/event_from_extension.xml')
        event = parse_xml_event(data_str)  # pylint: disable=redefined-outer-name
        self.assertIsNotNone(event)
        self.assertNotEqual(0, event.parameters)
        self.assertTrue(all(param is not None for param in event.parameters))

    def test_parse_json_event(self, *args):  # pylint: disable=unused-argument
        data_str = load_data('ext/event.json')
        event = parse_json_event(data_str)  # pylint: disable=redefined-outer-name
        self.assertIsNotNone(event)
        self.assertNotEqual(0, event.parameters)
        self.assertTrue(all(param is not None for param in event.parameters))

    def test_add_event_should_use_the_container_id_from_the_most_recent_goal_state(self):
        def create_event_and_return_container_id():  # pylint: disable=inconsistent-return-statements
            event.add_event(name='Event', op=TestEvent._Operation)
            event_list = self._collect_events()
            self.assertEqual(len(event_list), 1, "Could not find the event created by add_event")

            for p in event_list[0].parameters:
                if p.name == CommonTelemetryEventSchema.ContainerId:
                    return p.value

            self.fail("Could not find the Container ID on the event")

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            container_id = create_event_and_return_container_id()
            # The expected value comes from DATA_FILE
            self.assertEqual(container_id, 'c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2', "Incorrect container ID")

            protocol.mock_wire_data.set_container_id('AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE')
            protocol.client.update_goal_state()
            container_id = create_event_and_return_container_id()
            self.assertEqual(container_id, 'AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE', "Incorrect container ID")

            protocol.mock_wire_data.set_container_id('11111111-2222-3333-4444-555555555555')
            protocol.client.update_goal_state()
            container_id = create_event_and_return_container_id()
            self.assertEqual(container_id, '11111111-2222-3333-4444-555555555555', "Incorrect container ID")

    def test_add_event_should_handle_event_errors(self):
        with patch("azurelinuxagent.common.utils.fileutil.mkdir", side_effect=OSError):
            with patch('azurelinuxagent.common.logger.periodic_error') as mock_logger_periodic_error:
                add_event('test', message='test event', op=TestEvent._Operation)

                # The event shouldn't have been created
                self.assertTrue(len(self._collect_event_files()) == 0)

                # The exception should have been caught and logged
                args = mock_logger_periodic_error.call_args
                exception_message = args[0][1]
                self.assertIn("[EventError] Failed to create events folder", exception_message)

    def test_event_status_event_marked(self):
        es = event.__event_status__

        self.assertFalse(es.event_marked("Foo", "1.2", "FauxOperation"))
        es.mark_event_status("Foo", "1.2", "FauxOperation", True)
        self.assertTrue(es.event_marked("Foo", "1.2", "FauxOperation"))

        event.__event_status__ = event.EventStatus()
        event.init_event_status(self.tmp_dir)
        es = event.__event_status__
        self.assertTrue(es.event_marked("Foo", "1.2", "FauxOperation"))

    def test_event_status_defaults_to_success(self):
        es = event.__event_status__
        self.assertTrue(es.event_succeeded("Foo", "1.2", "FauxOperation"))

    def test_event_status_records_status(self):
        es = event.EventStatus()

        es.mark_event_status("Foo", "1.2", "FauxOperation", True)
        self.assertTrue(es.event_succeeded("Foo", "1.2", "FauxOperation"))

        es.mark_event_status("Foo", "1.2", "FauxOperation", False)
        self.assertFalse(es.event_succeeded("Foo", "1.2", "FauxOperation"))

    def test_event_status_preserves_state(self):
        es = event.__event_status__

        es.mark_event_status("Foo", "1.2", "FauxOperation", False)
        self.assertFalse(es.event_succeeded("Foo", "1.2", "FauxOperation"))

        event.__event_status__ = event.EventStatus()
        event.init_event_status(self.tmp_dir)
        es = event.__event_status__
        self.assertFalse(es.event_succeeded("Foo", "1.2", "FauxOperation"))

    def test_should_emit_event_ignores_unknown_operations(self):
        event.__event_status__ = event.EventStatus()

        self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", True))
        self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", False))

        # Marking the event has no effect
        event.mark_event_status("Foo", "1.2", "FauxOperation", True)

        self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", True))
        self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", False))

    def test_should_emit_event_handles_known_operations(self):
        event.__event_status__ = event.EventStatus()

        # Known operations always initially "fire"
        for op in event.__event_status_operations__:
            self.assertTrue(event.should_emit_event("Foo", "1.2", op, True))
            self.assertTrue(event.should_emit_event("Foo", "1.2", op, False))

        # Note a success event...
        for op in event.__event_status_operations__:
            event.mark_event_status("Foo", "1.2", op, True)

        # Subsequent success events should not fire, but failures will
        for op in event.__event_status_operations__:
            self.assertFalse(event.should_emit_event("Foo", "1.2", op, True))
            self.assertTrue(event.should_emit_event("Foo", "1.2", op, False))

        # Note a failure event...
        for op in event.__event_status_operations__:
            event.mark_event_status("Foo", "1.2", op, False)

        # Subsequent success events fire and failures do not
        for op in event.__event_status_operations__:
            self.assertTrue(event.should_emit_event("Foo", "1.2", op, True))
            self.assertFalse(event.should_emit_event("Foo", "1.2", op, False))

    @patch('azurelinuxagent.common.event.EventLogger')
    @patch('azurelinuxagent.common.logger.error')
    @patch('azurelinuxagent.common.logger.warn')
    @patch('azurelinuxagent.common.logger.info')
    def test_should_log_errors_if_failed_operation_and_empty_event_dir(self,
                                                                       mock_logger_info,
                                                                       mock_logger_warn,
                                                                       mock_logger_error,
                                                                       mock_reporter):
        mock_reporter.event_dir = None
        add_event("dummy name",
                  version=CURRENT_VERSION,
                  op=WALAEventOperation.Download,
                  is_success=False,
                  message="dummy event message",
                  reporter=mock_reporter)

        self.assertEqual(1, mock_logger_error.call_count)
        self.assertEqual(1, mock_logger_warn.call_count)
        self.assertEqual(0, mock_logger_info.call_count)

        args = mock_logger_error.call_args[0]
        self.assertEqual(('dummy name', 'Download', 'dummy event message', 0), args[1:])

    @patch('azurelinuxagent.common.event.EventLogger')
    @patch('azurelinuxagent.common.logger.error')
    @patch('azurelinuxagent.common.logger.warn')
    @patch('azurelinuxagent.common.logger.info')
    def test_should_log_errors_if_failed_operation_and_not_empty_event_dir(self,
                                                                           mock_logger_info,
                                                                           mock_logger_warn,
                                                                           mock_logger_error,
                                                                           mock_reporter):
        mock_reporter.event_dir = "dummy"

        with patch("azurelinuxagent.common.event.should_emit_event", return_value=True) as mock_should_emit_event:
            with patch("azurelinuxagent.common.event.mark_event_status"):
                with patch("azurelinuxagent.common.event.EventLogger._add_event"):
                    add_event("dummy name",
                              version=CURRENT_VERSION,
                              op=WALAEventOperation.Download,
                              is_success=False,
                              message="dummy event message")

                    self.assertEqual(1, mock_should_emit_event.call_count)
                    self.assertEqual(1, mock_logger_error.call_count)
                    self.assertEqual(0, mock_logger_warn.call_count)
                    self.assertEqual(0, mock_logger_info.call_count)

                    args = mock_logger_error.call_args[0]
                    self.assertEqual(('dummy name', 'Download', 'dummy event message', 0), args[1:])

    @patch('azurelinuxagent.common.event.EventLogger.add_event')
    def test_periodic_emits_if_not_previously_sent(self, mock_event):
        event.__event_logger__.reset_periodic()

        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(1, mock_event.call_count)

    @patch('azurelinuxagent.common.event.EventLogger.add_event')
    def test_periodic_does_not_emit_if_previously_sent(self, mock_event):
        event.__event_logger__.reset_periodic()

        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(1, mock_event.call_count)

        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(1, mock_event.call_count)

    @patch('azurelinuxagent.common.event.EventLogger.add_event')
    def test_periodic_emits_if_forced(self, mock_event):
        event.__event_logger__.reset_periodic()

        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(1, mock_event.call_count)

        event.add_periodic(logger.EVERY_DAY, "FauxEvent", force=True)
        self.assertEqual(2, mock_event.call_count)

    @patch('azurelinuxagent.common.event.EventLogger.add_event')
    def test_periodic_emits_after_elapsed_delta(self, mock_event):
        event.__event_logger__.reset_periodic()

        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(1, mock_event.call_count)

        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(1, mock_event.call_count)

        h = hash("FauxEvent" + WALAEventOperation.Unknown + ustr(True))
        event.__event_logger__.periodic_events[h] = \
            datetime.now(UTC) - logger.EVERY_DAY - logger.EVERY_HOUR
        event.add_periodic(logger.EVERY_DAY, "FauxEvent")
        self.assertEqual(2, mock_event.call_count)

    @patch('azurelinuxagent.common.event.EventLogger.add_event')
    def test_periodic_forwards_args(self, mock_event):
        event.__event_logger__.reset_periodic()
        event.add_periodic(logger.EVERY_DAY, "FauxEvent", op=WALAEventOperation.Log, is_success=True, duration=0,
                           version=str(CURRENT_VERSION), message="FauxEventMessage", log_event=True, force=False)
        mock_event.assert_called_once_with("FauxEvent", op=WALAEventOperation.Log, is_success=True, duration=0,
                                           version=str(CURRENT_VERSION), message="FauxEventMessage", log_event=True)

    @patch("azurelinuxagent.common.event.datetime")
    @patch('azurelinuxagent.common.event.EventLogger.add_event')
    def test_periodic_forwards_args_default_values(self, mock_event, mock_datetime):  # pylint: disable=unused-argument
        event.__event_logger__.reset_periodic()
        event.add_periodic(logger.EVERY_DAY, "FauxEvent", message="FauxEventMessage")
        mock_event.assert_called_once_with("FauxEvent", op=WALAEventOperation.Unknown, is_success=True, duration=0,
                                           version=str(CURRENT_VERSION), message="FauxEventMessage", log_event=True)

    @patch("azurelinuxagent.common.event.EventLogger.add_event")
    def test_add_event_default_variables(self, mock_add_event):
        add_event('test', message='test event')
        mock_add_event.assert_called_once_with('test', duration=0, is_success=True, log_event=True,
                                               message='test event', op=WALAEventOperation.Unknown,
                                               version=str(CURRENT_VERSION), flush=False)

    def test_collect_events_should_delete_event_files(self):
        add_event(name='Event1', op=TestEvent._Operation)
        add_event(name='Event1', op=TestEvent._Operation)
        add_event(name='Event3', op=TestEvent._Operation)

        event_files = self._collect_event_files()
        self.assertEqual(3, len(event_files), "Did not find all the event files that were created")

        event_list = self._collect_events()
        event_files = os.listdir(self.event_dir)

        self.assertEqual(len(event_list), 3, "Did not collect all the events that were created")
        self.assertEqual(len(event_files), 0, "The event files were not deleted")

    def test_save_event(self):
        add_event('test', message='test event', op=TestEvent._Operation)
        self.assertTrue(len(self._collect_event_files()) == 1)

        # checking the extension of the file created.
        for filename in os.listdir(self.event_dir):
            self.assertTrue(filename.endswith(AGENT_EVENT_FILE_EXTENSION),
                            'Event file does not have the correct extension ({0}): {1}'.format(AGENT_EVENT_FILE_EXTENSION, filename))

    def test_save_event_redact_sas_token(self):
        add_event('test', message='test event with sas token: https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r', op=TestEvent._Operation)
        event_files = self._collect_event_files()
        self.assertTrue(len(event_files) == 1)

        first_event = event_files[0]
        with open(first_event) as first_fh:
            first_event_text = first_fh.read()
            self.assertTrue('<redacted>' in first_event_text)

    def test_add_event_flush_immediately(self):
        def http_post_handler(url, body, **__):
            if self.is_telemetry_request(url):
                http_post_handler.request_body = body
                return MockHttpResponse(status=200)
            return None
        http_post_handler.request_body = None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_post_handler=http_post_handler):
            expected_message = 'test event'
            add_event('test', message=expected_message, op=TestEvent._Operation, flush=True)

            event_message = self._get_event_message_from_http_request_body(http_post_handler.request_body)
            self.assertEqual(event_message, expected_message, "The Message in the HTTP request does not match the Message in the add_event")
            # If immediate_flush is set, the event should be sent to the wireserver directly and no file should be created
            self.assertTrue(len(self._collect_event_files()) == 0)

    def test_add_event_flush_fails(self):
        def http_post_handler(url, **__):
            if self.is_telemetry_request(url):
                return MockHttpResponse(status=500)
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_post_handler=http_post_handler):
            expected_message = 'test event'
            add_event('test', message=expected_message, op=TestEvent._Operation, flush=True)

            # In case of failure, the event file should be created
            self.assertTrue(len(self._collect_event_files()) == 1)

    @staticmethod
    def _get_event_message(evt):
        for p in evt.parameters:
            if p.name == GuestAgentExtensionEventsSchema.Message:
                return p.value
        return None

    def test_collect_events_should_be_able_to_process_events_with_non_ascii_characters(self):
        self._create_test_event_file("custom_script_nonascii_characters.tld")

        event_list = self._collect_events()

        self.assertEqual(len(event_list), 1)
        self.assertEqual(TestEvent._get_event_message(event_list[0]), u'World\u05e2\u05d9\u05d5\u05ea \u05d0\u05d7\u05e8\u05d5\u05ea\u0906\u091c')

    def test_collect_events_should_redact_message(self):
        self._create_test_event_file("event_with_sas_token.tld")

        event_list = self._collect_events()

        self.assertEqual(len(event_list), 1)
        self.assertIn('<redacted>', TestEvent._get_event_message(event_list[0]))

    def test_collect_events_should_ignore_invalid_event_files(self):
        self._create_test_event_file("custom_script_1.tld")  # a valid event
        self._create_test_event_file("custom_script_utf-16.tld")
        self._create_test_event_file("custom_script_invalid_json.tld")
        os.chmod(self._create_test_event_file("custom_script_no_read_access.tld"), 0o200)
        self._create_test_event_file("custom_script_2.tld")  # another valid event

        with patch("azurelinuxagent.common.event.add_event") as mock_add_event:
            # mock the max retries on parsing invalid json to avoid the test run delays
            with patch("azurelinuxagent.ga.collect_telemetry_events.NUM_OF_EVENT_FILE_RETRIES", 1):
                event_list = self._collect_events()

                self.assertEqual(len(event_list), 2)
                self.assertTrue(
                    all(TestEvent._get_event_message(evt) == "A test telemetry message." for evt in event_list),
                    "The valid events were not found")

                invalid_events = []
                total_dropped_count = 0
                for args, kwargs in mock_add_event.call_args_list:  # pylint: disable=unused-variable
                    match = re.search(r"DroppedEventsCount: (\d+)", kwargs['message'])
                    if match is not None:
                        invalid_events.append(kwargs['op'])
                        total_dropped_count += int(match.groups()[0])

                self.assertEqual(3, total_dropped_count, "Total dropped events don't match")
                self.assertIn(WALAEventOperation.CollectEventErrors, invalid_events,
                              "{0} errors not reported".format(WALAEventOperation.CollectEventErrors))
                self.assertIn(WALAEventOperation.CollectEventUnicodeErrors, invalid_events,
                              "{0} errors not reported".format(WALAEventOperation.CollectEventUnicodeErrors))

    def test_save_event_rollover(self):
        # We keep 1000 events only, and the older ones are removed.

        num_of_events = 999
        add_event('test', message='first event')  # this makes the number of events num_of_events + 1.
        for i in range(num_of_events):
            add_event('test', message='test event {0}'.format(i))

        num_of_events += 1  # adding the first add_event.
        events = os.listdir(self.event_dir)
        events.sort()
        self.assertTrue(len(events) == num_of_events, "{0} is not equal to {1}".format(len(events), num_of_events))

        first_event = os.path.join(self.event_dir, events[0])
        with open(first_event) as first_fh:
            first_event_text = first_fh.read()
            self.assertTrue('first event' in first_event_text)

        add_event('test', message='last event')  # Adding the above event displaces the first_event

        events = os.listdir(self.event_dir)
        events.sort()
        self.assertTrue(len(events) == num_of_events, "{0} events found, {1} expected".format(len(events), num_of_events))

        first_event = os.path.join(self.event_dir, events[0])
        with open(first_event) as first_fh:
            first_event_text = first_fh.read()
            self.assertFalse('first event' in first_event_text, "'first event' not in {0}".format(first_event_text))
            self.assertTrue('test event 0' in first_event_text)

        last_event = os.path.join(self.event_dir, events[-1])
        with open(last_event) as last_fh:
            last_event_text = last_fh.read()
            self.assertTrue('last event' in last_event_text)

    def test_save_event_cleanup(self):
        for i in range(0, 2000):
            evt = os.path.join(self.event_dir, '{0}.tld'.format(ustr(1491004920536531 + i)))
            with open(evt, 'w') as fh:
                fh.write('{0}{1}'.format(TestEvent._Operation, i))

        test_events = self._collect_event_files()
        self.assertTrue(len(test_events) == 2000, "{0} events found, 2000 expected".format(len(test_events)))

        add_event('test', message='last event', op=TestEvent._Operation)

        events = os.listdir(self.event_dir)
        self.assertTrue(len(events) == 1000, "{0} events found, 1000 expected".format(len(events)))

    def test_elapsed_milliseconds(self):
        utc_start = datetime.now(UTC) + timedelta(days=1)
        self.assertEqual(0, elapsed_milliseconds(utc_start))

    def _assert_event_includes_all_parameters_in_the_telemetry_schema(self, actual_event, expected_parameters, assert_timestamp):
        # add the common parameters to the set of expected parameters
        all_expected_parameters = self.expected_common_parameters.copy()
        if self._is_guest_extension_event(actual_event):
            all_expected_parameters.update(self.expected_extension_events_params.copy())
        all_expected_parameters.update(expected_parameters)

        # convert the event parameters to a dictionary; do not include the timestamp,
        # which is verified using assert_timestamp()
        event_parameters = {}
        timestamp = None
        for p in actual_event.parameters:
            if p.name == CommonTelemetryEventSchema.OpcodeName:  # the timestamp is stored in the opcode name
                timestamp = p.value
            else:
                event_parameters[p.name] = p.value

        if self._is_telemetry_log_event(actual_event):
            # Remove Context2 from the event parameters and verify that the timestamp is correct
            telemetry_log_event_timestamp = event_parameters.pop(GuestAgentGenericLogsSchema.Context2, None)
            self.assertIsNotNone(telemetry_log_event_timestamp, "Context2 should be filled with a timestamp")
            assert_timestamp(telemetry_log_event_timestamp)

        self.maxDiff = None  # the dictionary diffs can be quite large; display the whole thing
        self.assertDictEqual(event_parameters, all_expected_parameters)

        self.assertIsNotNone(timestamp, "The event does not have a timestamp (Opcode)")
        assert_timestamp(timestamp)

    def _test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self, create_event_function, expected_parameters):
        """
        Helper to test methods that create events (e.g. add_event, add_log_event, etc).
        """
        # execute the method that creates the event, capturing the time range of the execution
        timestamp_lower = timeutil.create_utc_timestamp(datetime.now(UTC))
        create_event_function()
        timestamp_upper = timeutil.create_utc_timestamp(datetime.now(UTC))

        event_list = self._collect_events()
        self.assertEqual(len(event_list), 1)

        # verify the event parameters
        self._assert_event_includes_all_parameters_in_the_telemetry_schema(
            event_list[0],
            expected_parameters,
            assert_timestamp=lambda timestamp:
                self.assertTrue(timestamp_lower <= timestamp <= timestamp_upper, "The event timestamp (opcode) is incorrect")
        )

    def test_add_event_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self):
        self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(
            create_event_function=lambda:
                add_event(
                    name="TestEvent",
                    op=TestEvent._Operation,
                    is_success=True,
                    duration=1234,
                    version="1.2.3.4",
                    message="Test Message"),
            expected_parameters={
                GuestAgentExtensionEventsSchema.Name: 'TestEvent',
                GuestAgentExtensionEventsSchema.Version: '1.2.3.4',
                GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation,
                GuestAgentExtensionEventsSchema.OperationSuccess: True,
                GuestAgentExtensionEventsSchema.Message: 'Test Message',
                GuestAgentExtensionEventsSchema.Duration: 1234,
                GuestAgentExtensionEventsSchema.ExtensionType: ''})

    def test_add_periodic_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self):
        self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(
            create_event_function=lambda:
                add_periodic(
                    delta=logger.EVERY_MINUTE,
                    name="TestPeriodicEvent",
                    op=TestEvent._Operation,
                    is_success=False,
                    duration=4321,
                    version="4.3.2.1",
                    message="Test Periodic Message"),
            expected_parameters={
                GuestAgentExtensionEventsSchema.Name: 'TestPeriodicEvent',
                GuestAgentExtensionEventsSchema.Version: '4.3.2.1',
                GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation,
                GuestAgentExtensionEventsSchema.OperationSuccess: False,
                GuestAgentExtensionEventsSchema.Message: 'Test Periodic Message',
                GuestAgentExtensionEventsSchema.Duration: 4321,
                GuestAgentExtensionEventsSchema.ExtensionType: ''})

    @skip_if_predicate_true(lambda: True, "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled")
    def test_add_log_event_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self):
        self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(
            create_event_function=lambda: add_log_event(logger.LogLevel.INFO, 'A test INFO log event'),
            expected_parameters={
                GuestAgentGenericLogsSchema.EventName: 'Log',
                GuestAgentGenericLogsSchema.CapabilityUsed: 'INFO',
                GuestAgentGenericLogsSchema.Context1: 'A test INFO log event',
                GuestAgentGenericLogsSchema.Context3: ''
            })

    def test_add_log_event_should_always_create_events_when_forced(self):
        self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(
            create_event_function=lambda: add_log_event(logger.LogLevel.WARNING, TestEvent._Message, forced=True),
            expected_parameters={
                GuestAgentGenericLogsSchema.EventName: 'Log',
                GuestAgentGenericLogsSchema.CapabilityUsed: 'WARNING',
                GuestAgentGenericLogsSchema.Context1: TestEvent._Message,
                GuestAgentGenericLogsSchema.Context3: ''
            })

    def test_add_log_event_should_not_create_event_if_not_allowed_and_not_forced(self):
        add_log_event(logger.LogLevel.WARNING, 'A test WARNING log event')
        event_list = self._collect_events()
        self.assertEqual(len(event_list), 0, "No events should be created if not forced and not allowed")

    def test_report_metric_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self):
        self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(
            create_event_function=lambda: report_metric(TestEvent._Category, "%idle", "total", 12.34),
            expected_parameters={
                GuestAgentPerfCounterEventsSchema.Category: TestEvent._Category,
                GuestAgentPerfCounterEventsSchema.Counter: '%idle',
                GuestAgentPerfCounterEventsSchema.Instance: 'total',
                GuestAgentPerfCounterEventsSchema.Value: 12.34
            })

    def _create_test_event_file(self, source_file):
        source_file_path = os.path.join(data_dir, "events", source_file)
        target_file_path = os.path.join(self.event_dir, source_file)
        shutil.copy(source_file_path, target_file_path)
        return target_file_path

    def _collect_test_event_files(self, file_name):
        return [os.path.join(self.event_dir, f) for f in os.listdir(self.event_dir) if file_name in f]

    @staticmethod
    def _get_file_creation_timestamp(file):  # pylint: disable=redefined-builtin
        return timeutil.create_utc_timestamp(datetime.fromtimestamp(os.path.getmtime(file)).replace(tzinfo=UTC))

    def test_collect_events_should_add_all_the_parameters_in_the_telemetry_schema_to_legacy_agent_events(self):
        # Agents <= 2.2.46 use *.tld as the extension for event files (newer agents use "*.waagent.tld") and they populate
        # only a subset of fields; the rest are added by the current agent when events are collected.
self._create_test_event_file("legacy_agent.tld") event_list = self._collect_events() self.assertEqual(len(event_list), 1) self._assert_event_includes_all_parameters_in_the_telemetry_schema( event_list[0], expected_parameters={ GuestAgentExtensionEventsSchema.Name: "WALinuxAgent", GuestAgentExtensionEventsSchema.Version: "9.9.9", GuestAgentExtensionEventsSchema.IsInternal: False, GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: True, GuestAgentExtensionEventsSchema.Message: "The cgroup filesystem is ready to use", GuestAgentExtensionEventsSchema.Duration: 1234, GuestAgentExtensionEventsSchema.ExtensionType: "ALegacyExtensionType", CommonTelemetryEventSchema.GAVersion: "WALinuxAgent-1.1.1", CommonTelemetryEventSchema.ContainerId: "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE", CommonTelemetryEventSchema.EventTid: 98765, CommonTelemetryEventSchema.EventPid: 4321, CommonTelemetryEventSchema.TaskName: "ALegacyTask", CommonTelemetryEventSchema.KeywordName: "ALegacyKeywordName"}, assert_timestamp=lambda timestamp: self.assertEqual(timestamp, '1970-01-01 12:00:00', "The event timestamp (opcode) is incorrect") ) def test_collect_events_should_use_the_file_creation_time_for_legacy_agent_events_missing_a_timestamp(self): test_file = self._create_test_event_file("legacy_agent_no_timestamp.tld") event_creation_time = TestEvent._get_file_creation_timestamp(test_file) event_list = self._collect_events() self.assertEqual(len(event_list), 1) self._assert_event_includes_all_parameters_in_the_telemetry_schema( event_list[0], expected_parameters={ GuestAgentExtensionEventsSchema.Name: "WALinuxAgent", GuestAgentExtensionEventsSchema.Version: "9.9.9", GuestAgentExtensionEventsSchema.IsInternal: False, GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: True, GuestAgentExtensionEventsSchema.Message: "The cgroup filesystem is ready to use", 
                GuestAgentExtensionEventsSchema.Duration: 1234,
                GuestAgentExtensionEventsSchema.ExtensionType: "ALegacyExtensionType",
                CommonTelemetryEventSchema.GAVersion: "WALinuxAgent-1.1.1",
                CommonTelemetryEventSchema.ContainerId: "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE",
                CommonTelemetryEventSchema.EventTid: 98765,
                CommonTelemetryEventSchema.EventPid: 4321,
                CommonTelemetryEventSchema.TaskName: "ALegacyTask",
                CommonTelemetryEventSchema.KeywordName: "ALegacyKeywordName"},
            assert_timestamp=lambda timestamp:
                self.assertEqual(timestamp, event_creation_time, "The event timestamp (opcode) is incorrect")
        )

    def _assert_extension_event_includes_all_parameters_in_the_telemetry_schema(self, event_file):
        # Extensions drop their events as *.tld files on the events directory. They populate only a subset of fields,
        # and the rest are added by the agent when events are collected.
        test_file = self._create_test_event_file(event_file)
        event_creation_time = TestEvent._get_file_creation_timestamp(test_file)

        event_list = self._collect_events()

        self.assertEqual(len(event_list), 1)
        self._assert_event_includes_all_parameters_in_the_telemetry_schema(
            event_list[0],
            expected_parameters={
                GuestAgentExtensionEventsSchema.Name: 'Microsoft.Azure.Extensions.CustomScript',
                GuestAgentExtensionEventsSchema.Version: '2.0.4',
                GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation,
                GuestAgentExtensionEventsSchema.OperationSuccess: True,
                GuestAgentExtensionEventsSchema.Message: 'A test telemetry message.',
                GuestAgentExtensionEventsSchema.Duration: 150000,
                GuestAgentExtensionEventsSchema.ExtensionType: 'json'},
            assert_timestamp=lambda timestamp:
                self.assertEqual(timestamp, event_creation_time, "The event timestamp (opcode) is incorrect")
        )

    def test_collect_events_should_add_all_the_parameters_in_the_telemetry_schema_to_extension_events(self):
        self._assert_extension_event_includes_all_parameters_in_the_telemetry_schema('custom_script_1.tld')

    def test_collect_events_should_ignore_extra_parameters_in_extension_events(self):
        self._assert_extension_event_includes_all_parameters_in_the_telemetry_schema('custom_script_extra_parameters.tld')

    @staticmethod
    def _get_event_message_from_http_request_body(event_body):
        # The XML for the event is sent over as a CDATA element ("Event") in the request's body
        http_request_body = event_body if (
                event_body is None or type(event_body) is ustr) else textutil.str_to_encoded_ustr(event_body)
        request_body_xml_doc = textutil.parse_doc(http_request_body)

        event_node = textutil.find(request_body_xml_doc, "Event")
        if event_node is None:
            raise ValueError('Could not find the Event node in the XML document')
        if len(event_node.childNodes) != 1:
            raise ValueError('The Event node in the XML document should have exactly 1 child')

        event_node_first_child = event_node.childNodes[0]
        if event_node_first_child.nodeType != xml.dom.Node.CDATA_SECTION_NODE:
            raise ValueError('The Event node contents should be CDATA')

        event_node_cdata = event_node_first_child.nodeValue

        # The CDATA will contain a sequence of "<Param Name='...' Value='...' />" nodes, which
        # correspond to the parameters of the telemetry event. Wrap those into a "Helper" node
        # and extract the "Message"
        event_xml_text = '<Helper>{0}</Helper>'.format(event_node_cdata)
        event_xml_doc = textutil.parse_doc(event_xml_text)
        helper_node = textutil.find(event_xml_doc, "Helper")

        for child in helper_node.childNodes:
            if child.getAttribute('Name') == GuestAgentExtensionEventsSchema.Message:
                return child.getAttribute('Value')

        raise ValueError(
            'Could not find the Message for the telemetry event. Request body: {0}'.format(http_request_body))

    def test_report_event_should_encode_call_stack_correctly(self):
        """
        The Message in some telemetry events that include call stacks is being truncated in Kusto. While the issue
        doesn't seem to be in the agent itself, this test verifies that the Message of the event we send in the HTTP
        request matches the Message we read from the event's file.
        """
        def get_event_message_from_event_file(event_file):
            with open(event_file, "rb") as fd:
                event_data = fd.read().decode("utf-8")  # event files are UTF-8 encoded
            telemetry_event = json.loads(event_data)

            for p in telemetry_event['parameters']:
                if p['name'] == GuestAgentExtensionEventsSchema.Message:
                    return p['value']

            raise ValueError('Could not find the Message for the telemetry event in {0}'.format(event_file))

        def http_post_handler(url, body, **__):
            if self.is_telemetry_request(url):
                http_post_handler.request_body = body
                return MockHttpResponse(status=200)
            return None
        http_post_handler.request_body = None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_post_handler=http_post_handler) as protocol:
            event_file_path = self._create_test_event_file("event_with_callstack.waagent.tld")
            expected_message = get_event_message_from_event_file(event_file_path)

            event_list = self._collect_events()
            self._report_events(protocol, event_list)

            event_message = self._get_event_message_from_http_request_body(http_post_handler.request_body)

            self.assertEqual(event_message, expected_message, "The Message in the HTTP request does not match the Message in the event's *.tld file")

    def test_report_event_should_encode_events_correctly(self):
        def http_post_handler(url, body, **__):
            if self.is_telemetry_request(url):
                http_post_handler.request_body = body
                return MockHttpResponse(status=200)
            return None
        http_post_handler.request_body = None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_post_handler=http_post_handler) as protocol:
            test_messages = [
                'Non-English message - 此文字不是英文的',
                "Ξεσκεπάζω τὴν ψυχοφθόρα βδελυγμία",
                "The quick brown fox jumps over the lazy dog",
                "El pingüino Wenceslao hizo kilómetros bajo exhaustiva lluvia y frío, añoraba a su querido cachorro.",
                "Portez ce vieux whisky au juge blond qui fume sur son île intérieure, à côté de l'alcôve ovoïde, où les bûches",
                "se consument dans l'âtre, ce qui lui permet de penser à la cænogenèse de l'être dont il est question",
                "dans la cause ambiguë entendue à Moÿ, dans un capharnaüm qui, pense-t-il, diminue çà et là la qualité de son œuvre.",
                "D'fhuascail Íosa, Úrmhac na hÓighe Beannaithe, pór Éava agus Ádhaimh",
                "Árvíztűrő tükörfúrógép",
                "Kæmi ný öxi hér ykist þjófum nú bæði víl og ádrepa",
                "Sævör grét áðan því úlpan var ónýt",
                "いろはにほへとちりぬるを わかよたれそつねならむ うゐのおくやまけふこえて あさきゆめみしゑひもせす",
                "? דג סקרן שט בים מאוכזב ולפתע מצא לו חברה איך הקליטה",
                "Pchnąć w tę łódź jeża lub ośm skrzyń fig",
                "Normal string event"
            ]

            for msg in test_messages:
                add_event('TestEventEncoding', message=msg, op=TestEvent._Operation)

                event_list = self._collect_events()
                self._report_events(protocol, event_list)

                # In Py2, encode() produces a str and in Py3 it produces a bytes string.
                # type(bytes) == type(str) for Py2 so this check is mainly for Py3 to ensure that the event is encoded properly.
                self.assertIsInstance(http_post_handler.request_body, bytes, "The Event request body should be encoded")
                self.assertIn(textutil.str_to_encoded_ustr(msg).encode('utf-8'), http_post_handler.request_body,
                              "Encoded message not found in body")


class TestMetrics(AgentTestCase):
    @patch('azurelinuxagent.common.event.EventLogger.save_event')
    def test_report_metric(self, mock_event):
        event.report_metric("cpu", "%idle", "_total", 10.0)
        self.assertEqual(1, mock_event.call_count)
        event_json = mock_event.call_args[0][0]
        self.assertIn(event.TELEMETRY_EVENT_PROVIDER_ID, event_json)
        self.assertIn("%idle", event_json)
        event_dictionary = json.loads(event_json)
        self.assertEqual(event_dictionary['providerId'], event.TELEMETRY_EVENT_PROVIDER_ID)
        for parameter in event_dictionary["parameters"]:
            if parameter['name'] == GuestAgentPerfCounterEventsSchema.Counter:
                self.assertEqual(parameter['value'], '%idle')
                break
        else:
            self.fail("Counter '%idle' not found in event parameters: {0}".format(repr(event_dictionary)))

    def test_cleanup_message(self):
        ev_logger = event.EventLogger()
        self.assertEqual(None, ev_logger._clean_up_message(None))
self.assertEqual("", ev_logger._clean_up_message("")) self.assertEqual("Daemon Activate resource disk failure", ev_logger._clean_up_message( "Daemon Activate resource disk failure")) self.assertEqual("[M.A.E.CS-2.0.7] Target handler state", ev_logger._clean_up_message( '2019/10/07 21:54:16.629444 INFO [M.A.E.CS-2.0.7] Target handler state')) self.assertEqual("[M.A.E.CS-2.0.7] Initializing extension M.A.E.CS-2.0.7", ev_logger._clean_up_message( '2019/10/07 21:54:17.284385 INFO [M.A.E.CS-2.0.7] Initializing extension M.A.E.CS-2.0.7')) self.assertEqual("ExtHandler ProcessGoalState completed [incarnation 4; 4197 ms]", ev_logger._clean_up_message( "2019/10/07 21:55:38.474861 INFO ExtHandler ProcessGoalState completed [incarnation 4; 4197 ms]")) self.assertEqual("Daemon Azure Linux Agent Version:2.2.43", ev_logger._clean_up_message( "2019/10/07 21:52:28.615720 INFO Daemon Azure Linux Agent Version:2.2.43")) self.assertEqual('Daemon Cgroup controller "memory" is not mounted. Failed to create a cgroup for the VM Agent;' ' resource usage will not be tracked', ev_logger._clean_up_message('Daemon Cgroup controller "memory" is not mounted. 
Failed to ' 'create a cgroup for the VM Agent; resource usage will not be ' 'tracked')) self.assertEqual('ExtHandler Root directory /sys/fs/cgroup/memory/walinuxagent.extensions does not exist.', ev_logger._clean_up_message("2019/10/08 23:45:05.691037 WARNING ExtHandler Root directory " "/sys/fs/cgroup/memory/walinuxagent.extensions does not exist.")) self.assertEqual("LinuxAzureDiagnostic started to handle.", ev_logger._clean_up_message("2019/10/07 22:02:40 LinuxAzureDiagnostic started to handle.")) self.assertEqual("VMAccess started to handle.", ev_logger._clean_up_message("2019/10/07 21:56:58 VMAccess started to handle.")) self.assertEqual( '[PERIODIC] ExtHandler Root directory /sys/fs/cgroup/memory/walinuxagent.extensions does not exist.', ev_logger._clean_up_message("2019/10/08 23:45:05.691037 WARNING [PERIODIC] ExtHandler Root directory " "/sys/fs/cgroup/memory/walinuxagent.extensions does not exist.")) self.assertEqual("[PERIODIC] LinuxAzureDiagnostic started to handle.", ev_logger._clean_up_message( "2019/10/07 22:02:40 [PERIODIC] LinuxAzureDiagnostic started to handle.")) self.assertEqual("[PERIODIC] VMAccess started to handle.", ev_logger._clean_up_message("2019/10/07 21:56:58 [PERIODIC] VMAccess started to handle.")) self.assertEqual('[PERIODIC] Daemon Cgroup controller "memory" is not mounted. Failed to create a cgroup for ' 'the VM Agent; resource usage will not be tracked', ev_logger._clean_up_message('[PERIODIC] Daemon Cgroup controller "memory" is not mounted. 
' 'Failed to create a cgroup for the VM Agent; resource usage will ' 'not be tracked')) self.assertEqual('The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z INFO The time should be in UTC')) self.assertEqual('The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z The time should be in UTC')) self.assertEqual('[PERIODIC] The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z INFO [PERIODIC] The time should be in UTC')) self.assertEqual('[PERIODIC] The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z [PERIODIC] The time should be in UTC')) Azure-WALinuxAgent-a976115/tests/common/test_logger.py000066400000000000000000000761121510742556200227230ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import tempfile

from datetime import datetime, timedelta

from azurelinuxagent.common.event import __event_logger__, add_log_event, MAX_NUMBER_OF_EVENTS, EVENTS_DIRECTORY
from azurelinuxagent.common.future import UTC
import azurelinuxagent.common.logger as logger
from azurelinuxagent.common.utils import fileutil
from tests.lib.tools import AgentTestCase, MagicMock, patch, skip_if_predicate_true

_MSG_INFO = "This is our test info logging message {0} {1}"
_MSG_WARN = "This is our test warn logging message {0} {1}"
_MSG_ERROR = "This is our test error logging message {0} {1}"
_MSG_VERBOSE = "This is our test verbose logging message {0} {1}"
_DATA = ["arg1", "arg2"]


class TestLogger(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        self.lib_dir = tempfile.mkdtemp()
        self.event_dir = os.path.join(self.lib_dir, EVENTS_DIRECTORY)
        fileutil.mkdir(self.event_dir)
        self.log_file = tempfile.mkstemp(prefix="logfile-")[1]
        logger.reset_periodic()

    def tearDown(self):
        AgentTestCase.tearDown(self)
        logger.reset_periodic()
        logger.DEFAULT_LOGGER.appenders *= 0
        logger.set_prefix(None)
        fileutil.rm_dirs(self.event_dir)

    @patch('azurelinuxagent.common.logger.Logger.verbose')
    @patch('azurelinuxagent.common.logger.Logger.warn')
    @patch('azurelinuxagent.common.logger.Logger.error')
    @patch('azurelinuxagent.common.logger.Logger.info')
    def test_periodic_emits_if_not_previously_sent(self, mock_info, mock_error, mock_warn, mock_verbose):
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, logger.LogLevel.INFO, *_DATA)
        self.assertEqual(1, mock_info.call_count)

        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, logger.LogLevel.ERROR, *_DATA)
        self.assertEqual(1, mock_error.call_count)

        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, logger.LogLevel.WARNING, *_DATA)
        self.assertEqual(1, mock_warn.call_count)

        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, logger.LogLevel.VERBOSE, *_DATA)
        self.assertEqual(1, mock_verbose.call_count)
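The periodic_* tests above and below exercise a rate-limiting pattern: the logger remembers the last emission time per message hash (the `periodic_messages` dict) and re-emits a message only once its interval has elapsed. A minimal standalone sketch of that pattern follows; the class and method names here are illustrative, not the agent's actual API:

```python
from datetime import datetime, timedelta, timezone

class PeriodicEmitter:
    """Emit a given message at most once per `interval`, keyed by the message's hash."""

    def __init__(self):
        self._last_emitted = {}  # hash(msg) -> datetime of the last emission

    def emit(self, interval, msg):
        now = datetime.now(timezone.utc)
        last = self._last_emitted.get(hash(msg))
        if last is None or now >= last + interval:
            self._last_emitted[hash(msg)] = now
            return True   # caller should actually log the message
        return False      # suppressed: emitted too recently

emitter = PeriodicEmitter()
one_day = timedelta(days=1)
first = emitter.emit(one_day, "disk is low")    # never sent before -> emitted
second = emitter.emit(one_day, "disk is low")   # sent moments ago -> suppressed
# Mimic the tests above: back-date the recorded emission past the interval.
emitter._last_emitted[hash("disk is low")] -= one_day + timedelta(hours=1)
third = emitter.emit(one_day, "disk is low")    # interval elapsed -> emitted again
```

This mirrors how the tests force a re-emission by overwriting `DEFAULT_LOGGER.periodic_messages[hash(msg)]` with a timestamp older than the interval.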
    @patch('azurelinuxagent.common.logger.Logger.verbose')
    @patch('azurelinuxagent.common.logger.Logger.warn')
    @patch('azurelinuxagent.common.logger.Logger.error')
    @patch('azurelinuxagent.common.logger.Logger.info')
    def test_periodic_does_not_emit_if_previously_sent(self, mock_info, mock_error, mock_warn, mock_verbose):
        # The count does not increase from 1 - the first time it sends the data.
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA)
        self.assertIn(hash(_MSG_INFO), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_info.call_count)
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA)
        self.assertIn(hash(_MSG_INFO), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_info.call_count)

        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA)
        self.assertIn(hash(_MSG_WARN), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_warn.call_count)
        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA)
        self.assertIn(hash(_MSG_WARN), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_warn.call_count)

        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA)
        self.assertIn(hash(_MSG_ERROR), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_error.call_count)
        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA)
        self.assertIn(hash(_MSG_ERROR), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_error.call_count)

        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA)
        self.assertIn(hash(_MSG_VERBOSE), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_verbose.call_count)
        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA)
        self.assertIn(hash(_MSG_VERBOSE), logger.DEFAULT_LOGGER.periodic_messages)
        self.assertEqual(1, mock_verbose.call_count)

        self.assertEqual(4, len(logger.DEFAULT_LOGGER.periodic_messages))

    @patch('azurelinuxagent.common.logger.Logger.verbose')
    @patch('azurelinuxagent.common.logger.Logger.warn')
    @patch('azurelinuxagent.common.logger.Logger.error')
    @patch('azurelinuxagent.common.logger.Logger.info')
    def test_periodic_emits_after_elapsed_delta(self, mock_info, mock_error, mock_warn, mock_verbose):
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA)
        self.assertEqual(1, mock_info.call_count)
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA)
        self.assertEqual(1, mock_info.call_count)

        logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_INFO)] = datetime.now(UTC) - \
            logger.EVERY_DAY - logger.EVERY_HOUR
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA)
        self.assertEqual(2, mock_info.call_count)

        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA)
        self.assertEqual(1, mock_warn.call_count)
        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA)
        self.assertEqual(1, mock_warn.call_count)

        logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_WARN)] = datetime.now(UTC) - \
            logger.EVERY_DAY - logger.EVERY_HOUR
        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA)
        self.assertEqual(2, mock_warn.call_count)

        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA)
        self.assertEqual(1, mock_error.call_count)
        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA)
        self.assertEqual(1, mock_error.call_count)

        logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_ERROR)] = datetime.now(UTC) - \
            logger.EVERY_DAY - logger.EVERY_HOUR
        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA)
        self.assertEqual(2, mock_error.call_count)

        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA)
        self.assertEqual(1, mock_verbose.call_count)
        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA)
        self.assertEqual(1, mock_verbose.call_count)

        logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_VERBOSE)] = datetime.now(UTC) - \
            logger.EVERY_DAY - logger.EVERY_HOUR
        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA)
        self.assertEqual(2, mock_verbose.call_count)

    @patch('azurelinuxagent.common.logger.Logger.verbose')
    @patch('azurelinuxagent.common.logger.Logger.warn')
    @patch('azurelinuxagent.common.logger.Logger.error')
    @patch('azurelinuxagent.common.logger.Logger.info')
    def test_periodic_forwards_message_and_args(self, mock_info, mock_error, mock_warn, mock_verbose):
        logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA)
        mock_info.assert_called_once_with(_MSG_INFO, *_DATA)

        logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA)
        mock_error.assert_called_once_with(_MSG_ERROR, *_DATA)

        logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA)
        mock_warn.assert_called_once_with(_MSG_WARN, *_DATA)

        logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA)
        mock_verbose.assert_called_once_with(_MSG_VERBOSE, *_DATA)

    _UTCTimestampFormat = u"%Y-%m-%dT%H:%M:%S.%fZ"

    def test_logger_should_log_in_utc(self):
        file_name = "test.log"
        file_path = os.path.join(self.tmp_dir, file_name)
        test_logger = logger.Logger()
        test_logger.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=file_path)

        before_write_utc = datetime.now(UTC)
        test_logger.info("The time should be in UTC")

        with open(file_path, "r") as log_file:
            log = log_file.read()
            try:
                time_in_file = datetime.strptime(log.split(logger.LogLevel.STRINGS[logger.LogLevel.INFO])[0].strip(),
                                                 self._UTCTimestampFormat).replace(tzinfo=UTC)
            except ValueError:
                self.fail("Ensure timestamp follows ISO-8601 format + 'Z' for UTC")

            # If the time difference is > 5secs, there's a high probability that the time_in_file is in different TZ
            self.assertTrue((time_in_file - before_write_utc) <= timedelta(seconds=5))

    @patch("azurelinuxagent.common.logger.datetime")
    def test_logger_should_log_micro_seconds(self, mock_dt):
        # datetime.isoformat() skips ms if ms=0, this test ensures that ms is always set
        file_name = "test.log"
        file_path = os.path.join(self.tmp_dir, file_name)
        test_logger = logger.Logger()
        test_logger.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=file_path)

        ts_with_no_ms = datetime.now(UTC).replace(microsecond=0)
        mock_dt.now = MagicMock(return_value=ts_with_no_ms)

        test_logger.info("The time should contain milli-seconds")

        with open(file_path, "r") as log_file:
            log = log_file.read()
            try:
                time_in_file = datetime.strptime(log.split(logger.LogLevel.STRINGS[logger.LogLevel.INFO])[0].strip(),
                                                 self._UTCTimestampFormat).replace(tzinfo=UTC)
            except ValueError:
                self.fail("Ensure timestamp follows ISO-8601 format and has micro seconds in it")

            self.assertEqual(ts_with_no_ms, time_in_file, "Timestamps dont match")

    def test_telemetry_logger(self):
        mock = MagicMock()
        appender = logger.TelemetryAppender(logger.LogLevel.WARNING, mock)
        appender.write(logger.LogLevel.WARNING, "--unit-test-WARNING--")
        mock.assert_called_with(logger.LogLevel.WARNING, "--unit-test-WARNING--")
        mock.reset_mock()

        appender.write(logger.LogLevel.ERROR, "--unit-test-ERROR--")
        mock.assert_called_with(logger.LogLevel.ERROR, "--unit-test-ERROR--")
        mock.reset_mock()

        appender.write(logger.LogLevel.INFO, "--unit-test-INFO--")
        mock.assert_not_called()
        mock.reset_mock()

        for i in range(5):  # pylint: disable=unused-variable
            appender.write(logger.LogLevel.ERROR, "--unit-test-ERROR--")
            appender.write(logger.LogLevel.INFO, "--unit-test-INFO--")
        self.assertEqual(5, mock.call_count)  # Only ERROR should be called.

    @patch('azurelinuxagent.common.event.EventLogger.save_event')
    def test_telemetry_logger_not_on_by_default(self, mock_save):
        appender = logger.TelemetryAppender(logger.LogLevel.WARNING, add_log_event)
        appender.write(logger.LogLevel.WARNING, 'Cgroup controller "memory" is not mounted. '
                                                'Failed to create a cgroup for extension '
                                                'Microsoft.OSTCExtensions.DummyExtension-1.2.3.4')
        self.assertEqual(0, mock_save.call_count)

    @patch("azurelinuxagent.common.logger.StdoutAppender.write")
    @patch("azurelinuxagent.common.logger.TelemetryAppender.write")
    @patch("azurelinuxagent.common.logger.ConsoleAppender.write")
    @patch("azurelinuxagent.common.logger.FileAppender.write")
    def test_add_appender(self, mock_file_write, mock_console_write, mock_telem_write, mock_stdout_write):
        lg = logger.Logger(logger.DEFAULT_LOGGER, "TestLogger1")
        lg.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file)
        lg.add_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)
        lg.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null")
        lg.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING, path=None)

        counter = 0
        for appender in lg.appenders:
            if isinstance(appender, logger.FileAppender):
                counter += 1
            elif isinstance(appender, logger.TelemetryAppender):
                counter += 1
            elif isinstance(appender, logger.ConsoleAppender):
                counter += 1
            elif isinstance(appender, logger.StdoutAppender):
                counter += 1
        # All 4 appenders should have been included.
        self.assertEqual(4, counter)

        # The write for all the loggers will get called, but the levels are honored in the individual write method
        # itself. Each appender has its own test to validate the writing of the log message for different levels.
        # For Reference: tests.common.test_logger.TestAppender
        lg.warn("Test Log")
        self.assertEqual(1, mock_file_write.call_count)
        self.assertEqual(1, mock_console_write.call_count)
        self.assertEqual(1, mock_telem_write.call_count)
        self.assertEqual(1, mock_stdout_write.call_count)

        lg.info("Test Log")
        self.assertEqual(2, mock_file_write.call_count)
        self.assertEqual(2, mock_console_write.call_count)
        self.assertEqual(2, mock_telem_write.call_count)
        self.assertEqual(2, mock_stdout_write.call_count)

        lg.error("Test Log")
        self.assertEqual(3, mock_file_write.call_count)
        self.assertEqual(3, mock_console_write.call_count)
        self.assertEqual(3, mock_telem_write.call_count)
        self.assertEqual(3, mock_stdout_write.call_count)

    @patch("azurelinuxagent.common.logger.StdoutAppender.write")
    @patch("azurelinuxagent.common.logger.TelemetryAppender.write")
    @patch("azurelinuxagent.common.logger.ConsoleAppender.write")
    @patch("azurelinuxagent.common.logger.FileAppender.write")
    def test_log_should_redact_sas_tokens(self, mock_file_write, mock_console_write, mock_telem_write, mock_stdout_write):
        lg = logger.Logger(logger.DEFAULT_LOGGER)
        lg.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file)
        lg.add_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)
        lg.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null")
        lg.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING, path=None)

        sas_token = "https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r"

        lg.info("Test blob {0}", sas_token)
        self.assertRegex(mock_file_write.call_args[0][1], r"INFO.*redacted")
        self.assertRegex(mock_console_write.call_args[0][1], r"INFO.*redacted")
        self.assertRegex(mock_telem_write.call_args[0][1], r"INFO.*redacted")
        self.assertRegex(mock_stdout_write.call_args[0][1], r"INFO.*redacted")

        lg.warn("Test blob {0}", sas_token)
        self.assertRegex(mock_file_write.call_args[0][1], r"WARNING.*redacted")
        self.assertRegex(mock_console_write.call_args[0][1], r"WARNING.*redacted")
        self.assertRegex(mock_telem_write.call_args[0][1], r"WARNING.*redacted")
        self.assertRegex(mock_stdout_write.call_args[0][1], r"WARNING.*redacted")

        lg.error("Test blob {0}", sas_token)
        self.assertRegex(mock_file_write.call_args[0][1], r"ERROR.*redacted")
        self.assertRegex(mock_console_write.call_args[0][1], r"ERROR.*redacted")
        self.assertRegex(mock_telem_write.call_args[0][1], r"ERROR.*redacted")
        self.assertRegex(mock_stdout_write.call_args[0][1], r"ERROR.*redacted")

        lg.verbose("Test blob {0}", sas_token)
        self.assertRegex(mock_file_write.call_args[0][1], r"VERBOSE.*redacted")
        self.assertRegex(mock_console_write.call_args[0][1], r"VERBOSE.*redacted")
        self.assertRegex(mock_telem_write.call_args[0][1], r"VERBOSE.*redacted")
        self.assertRegex(mock_stdout_write.call_args[0][1], r"VERBOSE.*redacted")

    @patch("azurelinuxagent.common.logger.StdoutAppender.write")
    @patch("azurelinuxagent.common.logger.TelemetryAppender.write")
    @patch("azurelinuxagent.common.logger.ConsoleAppender.write")
    @patch("azurelinuxagent.common.logger.FileAppender.write")
    def test_set_prefix(self, mock_file_write, mock_console_write, mock_telem_write, mock_stdout_write):
        lg = logger.Logger(logger.DEFAULT_LOGGER)
        prefix = "YoloLogger"

        lg.set_prefix(prefix)
        self.assertEqual(lg.prefix, prefix)

        lg.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file)
        lg.add_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)
        lg.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null")
        lg.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING, path=None)

        lg.error("Test Log")

        self.assertIn(prefix, mock_file_write.call_args[0][1])
        self.assertIn(prefix, mock_console_write.call_args[0][1])
        self.assertIn(prefix, mock_telem_write.call_args[0][1])
        self.assertIn(prefix, mock_stdout_write.call_args[0][1])

    @patch("azurelinuxagent.common.logger.StdoutAppender.write")
    @patch("azurelinuxagent.common.logger.TelemetryAppender.write")
    @patch("azurelinuxagent.common.logger.ConsoleAppender.write")
    @patch("azurelinuxagent.common.logger.FileAppender.write")
    def test_nested_logger(self, mock_file_write, mock_console_write, mock_telem_write, mock_stdout_write):
        """
        The purpose of this test is to see if the logger gets correctly created when passed another logger and also
        if the appender correctly gets the messages logged. This is how the ExtHandlerInstance logger works.

        I initialize the default logger(logger), then create a new logger(lg) from it, and then log using logger & lg.
        See if both logs are flowing through or not.
        """
        parent_prefix = "ParentLogger"
        child_prefix = "ChildLogger"

        logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file)
        logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)
        logger.add_logger_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null")
        logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING)
        logger.set_prefix(parent_prefix)

        lg = logger.Logger(logger.DEFAULT_LOGGER, child_prefix)

        lg.error("Test Log")
        self.assertEqual(1, mock_file_write.call_count)
        self.assertEqual(1, mock_console_write.call_count)
        self.assertEqual(1, mock_telem_write.call_count)
        self.assertEqual(1, mock_stdout_write.call_count)

        self.assertIn(child_prefix, mock_file_write.call_args[0][1])
        self.assertIn(child_prefix, mock_console_write.call_args[0][1])
        self.assertIn(child_prefix, mock_telem_write.call_args[0][1])
        self.assertIn(child_prefix, mock_stdout_write.call_args[0][1])

        logger.error("Test Log")
        self.assertEqual(2, mock_file_write.call_count)
        self.assertEqual(2, mock_console_write.call_count)
        self.assertEqual(2, mock_telem_write.call_count)
        self.assertEqual(2, mock_stdout_write.call_count)

        self.assertIn(parent_prefix, mock_file_write.call_args[0][1])
        self.assertIn(parent_prefix, mock_console_write.call_args[0][1])
        self.assertIn(parent_prefix, mock_telem_write.call_args[0][1])
        self.assertIn(parent_prefix, mock_stdout_write.call_args[0][1])

    @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True)
    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_telemetry_logger_add_log_event(self, mock_lib_dir, *_):
        mock_lib_dir.return_value = self.lib_dir
        __event_logger__.event_dir = self.event_dir
        prefix = "YoloLogger"

        logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)
        logger.set_prefix(prefix)

        logger.warn('Test Log - Warning')

        event_files = os.listdir(__event_logger__.event_dir)
        self.assertEqual(1, len(event_files))

        log_file_event = os.path.join(__event_logger__.event_dir, event_files[0])
        try:
            with open(log_file_event) as logfile:
                logcontent = logfile.read()
                # Checking the contents of the event file.
                self.assertIn("Test Log - Warning", logcontent)
        except Exception as e:
            self.assertFalse(True, "The log file looks like it isn't correctly setup for this test. Take a look. "  # pylint: disable=redundant-unittest-assert
                                   "{0}".format(e))

    @skip_if_predicate_true(lambda: True, "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled")
    @patch("azurelinuxagent.common.logger.StdoutAppender.write")
    @patch("azurelinuxagent.common.logger.ConsoleAppender.write")
    @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True)
    def test_telemetry_logger_verify_maximum_recursion_depths_doesnt_happen(self, *_):
        logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path="/dev/null")
        logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)

        for i in range(MAX_NUMBER_OF_EVENTS):
            logger.warn('Test Log - {0} - 1 - Warning'.format(i))

        exception_caught = False

        # #1035 was caused due to too many files being written in an error condition. Adding even one more here broke
        # the camels back earlier - It would go into an infinite recursion as telemetry would call log, which in turn
        # would call telemetry, and so on.
        # The description of the fix is given in the comments @ azurelinuxagent.common.logger.Logger#log.write_log.
        try:
            for i in range(10):
                logger.warn('Test Log - {0} - 2 - Warning'.format(i))
        except RuntimeError:
            exception_caught = True

        self.assertFalse(exception_caught, msg="Caught a Runtime Error. This should not have been raised.")

    @skip_if_predicate_true(lambda: True, "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled")
    @patch("azurelinuxagent.common.logger.StdoutAppender.write")
    @patch("azurelinuxagent.common.logger.ConsoleAppender.write")
    @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True)
    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_telemetry_logger_check_all_file_logs_written_when_events_gt_MAX_NUMBER_OF_EVENTS(self, mock_lib_dir, *_):
        mock_lib_dir.return_value = self.lib_dir
        __event_logger__.event_dir = self.event_dir
        no_of_log_statements = MAX_NUMBER_OF_EVENTS + 100
        exception_caught = False
        prefix = "YoloLogger"

        logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file)
        logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event)
        logger.set_prefix(prefix)

        # Calling logger.warn no_of_log_statements times would cause the telemetry appender to write
        # 1000 events into the events dir, and then drop the remaining events. It should not generate the RuntimeError
        try:
            for i in range(0, no_of_log_statements):
                logger.warn('Test Log - {0} - 1 - Warning'.format(i))
        except RuntimeError:
            exception_caught = True

        self.assertFalse(exception_caught, msg="Caught a Runtime Error. This should not have been raised.")
        self.assertEqual(MAX_NUMBER_OF_EVENTS, len(os.listdir(__event_logger__.event_dir)))

        try:
            with open(self.log_file) as logfile:
                logcontent = logfile.readlines()

                # Checking the last log entry.
                # Subtracting 1 as range is exclusive of the upper bound
                self.assertIn("WARNING {1} Test Log - {0} - 1 - Warning".format(no_of_log_statements - 1, prefix),
                              logcontent[-1])

                # Checking the 1001st log entry. We know that 1001st entry would generate a PERIODIC message of too many
                # events, which should be captured in the log file as well.
                self.assertRegex(logcontent[1001],
                                 r"(.*WARNING\s*{0}\s*\[PERIODIC\]\s*Too many files under:.*{1}, "
                                 r"current count\:\s*\d+,\s*removing oldest\s*.*)".format(prefix, self.event_dir))
        except Exception as e:
            self.assertFalse(True, "The log file looks like it isn't correctly setup for this test. "  # pylint: disable=redundant-unittest-assert
                                   "Take a look. {0}".format(e))


class TestAppender(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        self.lib_dir = tempfile.mkdtemp()
        self.event_dir = os.path.join(self.lib_dir, EVENTS_DIRECTORY)
        fileutil.mkdir(self.event_dir)
        self.log_file = tempfile.mkstemp(prefix="logfile-")[1]
        logger.reset_periodic()

    def tearDown(self):
        AgentTestCase.tearDown(self)
        logger.reset_periodic()
        fileutil.rm_dirs(self.event_dir)
        logger.DEFAULT_LOGGER.appenders *= 0

    @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True)
    @patch("azurelinuxagent.common.logger.sys.stdout.write")
    @patch("azurelinuxagent.common.event.EventLogger.add_log_event")
    def test_no_appenders_added(self, mock_add_log_event, mock_sys_stdout, *_):
        # Validating no logs are written in any appender
        logger.verbose("test-verbose")
        logger.info("test-info")
        logger.warn("test-warn")
        logger.error("test-error")

        # Validating Console and File logs
        with open(self.log_file) as logfile:
            logcontent = logfile.readlines()
        self.assertEqual(0, len(logcontent))

        # Validating telemetry call
        self.assertEqual(0, mock_add_log_event.call_count)

        # Validating stdout call
        self.assertEqual(0, mock_sys_stdout.call_count)

    def test_console_appender(self):
        logger.add_logger_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path=self.log_file)

        logger.verbose("test-verbose")
        with open(self.log_file) as logfile:
            logcontent = logfile.readlines()
        # Levels are honored and Verbose should not be written.
        self.assertEqual(0, len(logcontent))

        logger.info("test-info")
        with open(self.log_file) as logfile:
            logcontent = logfile.readlines()
        # Levels are honored and Info should not be written.
        self.assertEqual(0, len(logcontent))

        # As console has a mode of w, it'll always only have 1 line only.
        logger.warn("test-warn")
        with open(self.log_file) as logfile:
            logcontent = logfile.readlines()
        self.assertEqual(1, len(logcontent))
        self.assertRegex(logcontent[0], r"(.*WARNING\s\w+\s*test-warn.*)")

        logger.error("test-error")
        with open(self.log_file) as logfile:
            logcontent = logfile.readlines()
        # Levels are honored and Info, Verbose should not be written.
        self.assertEqual(1, len(logcontent))
        self.assertRegex(logcontent[0], r"(.*ERROR\s\w+\s*test-error.*)")

    def test_file_appender(self):
        logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file)

        logger.verbose("test-verbose")
        logger.info("test-info")
        logger.warn("test-warn")
        logger.error("test-error")

        with open(self.log_file) as logfile:
            logcontent = logfile.readlines()
        # Levels are honored and Verbose should not be written.
self.assertEqual(3, len(logcontent)) self.assertRegex(logcontent[0], r"(.*INFO\s\w+\s*test-info.*)") self.assertRegex(logcontent[1], r"(.*WARNING\s\w+\s*test-warn.*)") self.assertRegex(logcontent[2], r"(.*ERROR\s\w+\s*test-error.*)") @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True) @patch("azurelinuxagent.common.event.EventLogger.add_log_event") def test_telemetry_appender(self, mock_add_log_event, *_): logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) logger.verbose("test-verbose") logger.info("test-info") logger.warn("test-warn") logger.error("test-error") self.assertEqual(2, mock_add_log_event.call_count) @patch("azurelinuxagent.common.logger.sys.stdout.write") def test_stdout_appender(self, mock_sys_stdout): logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.ERROR) logger.verbose("test-verbose") logger.info("test-info") logger.warn("test-warn") logger.error("test-error") # Validating only test-error gets logged and not others. 
self.assertEqual(1, mock_sys_stdout.call_count) def test_console_output_enabled_should_return_true_when_there_are_console_appenders(self): my_logger = logger.Logger() my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.INFO, None) self.assertTrue(my_logger.console_output_enabled(), "Console output should be enabled, appenders = {0}".format(my_logger.appenders)) def test_console_output_enabled_should_return_false_when_there_are_no_console_appenders(self): my_logger = logger.Logger() my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) self.assertFalse(my_logger.console_output_enabled(), "Console output should not be enabled, appenders = {0}".format(my_logger.appenders)) def test_disable_console_output_should_remove_all_console_appenders(self): my_logger = logger.Logger() my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.INFO, None) my_logger.disable_console_output() self.assertTrue( len(my_logger.appenders) == 2 and all(isinstance(a, logger.StdoutAppender) for a in my_logger.appenders), "The console appender was not removed: {0}".format(my_logger.appenders)) Azure-WALinuxAgent-a976115/tests/common/test_singletonperthread.py000066400000000000000000000152041510742556200253400ustar00rootroot00000000000000import uuid from multiprocessing import Queue from threading import Thread, current_thread from azurelinuxagent.common.singletonperthread import SingletonPerThread from tests.lib.tools import AgentTestCase, clear_singleton_instances class Singleton(SingletonPerThread): """ Since these tests deal with testing in a multithreaded environment, we employ the use of multiprocessing.Queue() to ensure that 
    the data is consistent. This test class uses a uuid to identify an object instead of directly using object
    reference because Queue.get() returns a different object reference than what is put in it even though the
    object is same (which is verified using uuid in this test class)

    Eg:

    obj1 = WireClient("obj1")
    obj1
    <__main__.WireClient object at 0x7f5e78476198>
    q = Queue()
    q.put(obj1)
    test1 = q.get()
    test1
    <__main__.WireClient object at 0x7f5e78430630>
    test1.endpoint == obj1.endpoint
    True
    """

    def __init__(self):
        # Set the name of the object to the current thread name
        self.name = current_thread().name
        # Unique identifier for a class object
        self.uuid = str(uuid.uuid4())


class TestSingletonPerThread(AgentTestCase):

    THREAD_NAME_1 = 'thread-1'
    THREAD_NAME_2 = 'thread-2'

    def setUp(self):
        super(TestSingletonPerThread, self).setUp()
        # In a multi-threaded environment, exceptions thrown in the child thread will not be propagated to the parent
        # thread. In order to achieve that, adding all exceptions to a Queue and then checking that in parent thread.
        self.errors = Queue()
        clear_singleton_instances(Singleton)

    def _setup_multithread_and_execute(self, func1, args1, func2, args2, t1_name=None, t2_name=None):
        t1 = Thread(target=func1, args=args1)
        t2 = Thread(target=func2, args=args2)
        t1.name = t1_name if t1_name else self.THREAD_NAME_1
        t2.name = t2_name if t2_name else self.THREAD_NAME_2
        t1.start()
        t2.start()
        t1.join()
        t2.join()

        errs = []
        while not self.errors.empty():
            errs.append(self.errors.get())
        if len(errs) > 0:
            raise Exception("Errors: %s" % ' , '.join(errs))

    @staticmethod
    def _get_test_class_instance(q, err):
        try:
            obj = Singleton()
            q.put(obj)
        except Exception as e:
            err.put(str(e))

    def _parse_instances_and_return_thread_objects(self, instances, t1_name=None, t2_name=None):
        obj1, obj2 = instances.get(), instances.get()

        def check_obj(name):
            if obj1.name == name:
                return obj1
            elif obj2.name == name:
                return obj2
            else:
                return None

        t1_object = check_obj(t1_name if t1_name else self.THREAD_NAME_1)
        t2_object = check_obj(t2_name if t2_name else self.THREAD_NAME_2)

        return t1_object, t2_object

    def test_it_should_have_only_one_instance_for_same_thread(self):
        obj1 = Singleton()
        obj2 = Singleton()

        self.assertEqual(obj1.uuid, obj2.uuid)

    def test_it_should_have_multiple_instances_for_multiple_threads(self):
        instances = Queue()

        self._setup_multithread_and_execute(func1=self._get_test_class_instance,
                                            args1=(instances, self.errors),
                                            func2=self._get_test_class_instance,
                                            args2=(instances, self.errors))

        self.assertEqual(2, instances.qsize())  # Assert that there are 2 objects in the queue
        obj1, obj2 = instances.get(), instances.get()
        self.assertNotEqual(obj1.uuid, obj2.uuid)

    def test_it_should_return_existing_instance_for_new_thread_with_same_name(self):
        instances = Queue()
        self._setup_multithread_and_execute(func1=self._get_test_class_instance,
                                            args1=(instances, self.errors),
                                            func2=self._get_test_class_instance,
                                            args2=(instances, self.errors))

        t1_obj, t2_obj = self._parse_instances_and_return_thread_objects(instances)

        new_instances = Queue()
        # The 2nd call is to get new objects with the same thread name to verify if the objects are same
        self._setup_multithread_and_execute(func1=self._get_test_class_instance,
                                            args1=(new_instances, self.errors),
                                            func2=self._get_test_class_instance,
                                            args2=(new_instances, self.errors))

        new_t1_obj, new_t2_obj = self._parse_instances_and_return_thread_objects(new_instances)

        self.assertEqual(t1_obj.name, new_t1_obj.name)
        self.assertEqual(t1_obj.uuid, new_t1_obj.uuid)
        self.assertEqual(t2_obj.name, new_t2_obj.name)
        self.assertEqual(t2_obj.uuid, new_t2_obj.uuid)

    def test_singleton_object_should_match_thread_name(self):
        instances = Queue()
        t1_name = str(uuid.uuid4())
        t2_name = str(uuid.uuid4())

        test_class_obj_name = lambda t_name: "%s__%s" % (Singleton.__name__, t_name)

        self._setup_multithread_and_execute(func1=self._get_test_class_instance,
                                            args1=(instances, self.errors),
                                            func2=self._get_test_class_instance,
                                            args2=(instances, self.errors),
                                            t1_name=t1_name,
                                            t2_name=t2_name)

        singleton_instances = Singleton._instances  # pylint: disable=no-member

        # Assert instance names are consistent with the thread names
        self.assertIn(test_class_obj_name(t1_name), singleton_instances)
        self.assertIn(test_class_obj_name(t2_name), singleton_instances)

        # Assert that the objects match their respective threads
        # This function matches objects with their thread names and returns the respective object or None if not found
        t1_obj, t2_obj = self._parse_instances_and_return_thread_objects(instances, t1_name, t2_name)

        # Ensure that objects for both the threads were found
        self.assertIsNotNone(t1_obj)
        self.assertIsNotNone(t2_obj)

        # Ensure that the objects match with their respective thread objects
        self.assertEqual(singleton_instances[test_class_obj_name(t1_name)].uuid, t1_obj.uuid)
        self.assertEqual(singleton_instances[test_class_obj_name(t2_name)].uuid, t2_obj.uuid)
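The tests above key instances by `"<ClassName>__<thread name>"` in a class-level `_instances` dict; the real `SingletonPerThread` lives in `azurelinuxagent.common.singletonperthread` and is not shown in this dump. A minimal sketch of such a base class (the locking scheme and the `getattr` guard in `__init__` are assumptions for illustration, not the agent's actual code):

```python
import threading
import uuid


class SingletonPerThread(object):
    # Hypothetical sketch: one instance per (class, thread-name) pair,
    # stored in a class-level dict guarded by a lock.
    _instances = {}
    _lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        key = "%s__%s" % (cls.__name__, threading.current_thread().name)
        with cls._lock:
            if key not in cls._instances:
                cls._instances[key] = super(SingletonPerThread, cls).__new__(cls)
        return cls._instances[key]


class Example(SingletonPerThread):
    def __init__(self):
        # __init__ runs on every construction, so preserve state set earlier
        self.uuid = getattr(self, "uuid", str(uuid.uuid4()))
```

Because the key is the thread *name*, a second thread created with the same name gets the existing instance back, which is exactly what `test_it_should_return_existing_instance_for_new_thread_with_same_name` exercises.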
Azure-WALinuxAgent-a976115/tests/common/test_telemetryevent.py000066400000000000000000000056331510742556200245200ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.telemetryevent import TelemetryEvent, TelemetryEventParam, GuestAgentExtensionEventsSchema, \ CommonTelemetryEventSchema from tests.lib.tools import AgentTestCase class TestTelemetryEvent(AgentTestCase): @staticmethod def _get_test_event(name="DummyExtension", op="Unknown", is_success=True, duration=0, version="foo", evt_type="", is_internal=False, message="DummyMessage", eventId=1): event = TelemetryEvent(eventId, "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX") event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, name)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str(version))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.IsInternal, is_internal)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, op)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, is_success)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, message)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, duration)) 
event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.ExtensionType, evt_type)) return event def test_contains_works_for_TelemetryEvent(self): test_event = TestTelemetryEvent._get_test_event(message="Dummy Event") self.assertTrue(GuestAgentExtensionEventsSchema.Name in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Version in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.IsInternal in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Operation in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.OperationSuccess in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Message in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Duration in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.ExtensionType in test_event) self.assertFalse(CommonTelemetryEventSchema.GAVersion in test_event) self.assertFalse(CommonTelemetryEventSchema.ContainerId in test_event)Azure-WALinuxAgent-a976115/tests/common/test_version.py000066400000000000000000000247661510742556200231410ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # from __future__ import print_function import os import textwrap import mock import azurelinuxagent.common.conf as conf from azurelinuxagent.common.future import ustr from azurelinuxagent.common.event import EVENTS_DIRECTORY from azurelinuxagent.common.version import set_current_agent, \ AGENT_LONG_VERSION, AGENT_VERSION, AGENT_NAME, AGENT_NAME_PATTERN, \ get_f5_platform, get_distro, get_lis_version, PY_VERSION_MAJOR, \ PY_VERSION_MINOR, get_daemon_version, set_daemon_version, __DAEMON_VERSION_ENV_VARIABLE as DAEMON_VERSION_ENV_VARIABLE from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from tests.lib.tools import AgentTestCase, open_patch, patch def freebsd_system(): return ["FreeBSD"] def freebsd_system_release(x, y, z): # pylint: disable=unused-argument return "10.0" def openbsd_system(): return ["OpenBSD"] def openbsd_system_release(x, y, z): # pylint: disable=unused-argument return "20.0" def default_system(): return [""] def default_system_no_linux_distro(): return '', '', '' def default_system_exception(): raise Exception def is_platform_dist_supported(): # platform.dist() and platform.linux_distribution() is deprecated from Python 3.8+ if PY_VERSION_MAJOR == 3 and PY_VERSION_MINOR >= 8: return False return True class TestAgentVersion(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) @mock.patch('platform.system', side_effect=freebsd_system) @mock.patch('re.sub', side_effect=freebsd_system_release) def test_distro_is_correct_format_when_freebsd(self, platform_system_name, mock_variable): # pylint: disable=unused-argument osinfo = get_distro() freebsd_list = ['freebsd', "10.0", '', 'freebsd'] self.assertListEqual(freebsd_list, osinfo) @mock.patch('platform.system', side_effect=openbsd_system) @mock.patch('re.sub', side_effect=openbsd_system_release) def test_distro_is_correct_format_when_openbsd(self, platform_system_name, mock_variable): # pylint: disable=unused-argument osinfo = 
get_distro() openbsd_list = ['openbsd', "20.0", '', 'openbsd'] self.assertListEqual(openbsd_list, osinfo) @mock.patch('platform.system', side_effect=default_system) def test_distro_is_correct_format_when_default_case(self, *args): # pylint: disable=unused-argument default_list = ['', '', '', ''] unknown_list = ['unknown', 'FFFF', '', ''] if is_platform_dist_supported(): with patch('platform.dist', side_effect=default_system_no_linux_distro): osinfo = get_distro() self.assertListEqual(default_list, osinfo) else: # platform.dist() is deprecated in Python 3.7+ and would throw, resulting in unknown distro osinfo = get_distro() self.assertListEqual(unknown_list, osinfo) @mock.patch('platform.system', side_effect=default_system) def test_distro_is_correct_for_exception_case(self, *args): # pylint: disable=unused-argument default_list = ['unknown', 'FFFF', '', ''] if is_platform_dist_supported(): with patch('platform.dist', side_effect=default_system_exception): osinfo = get_distro() else: # platform.dist() is deprecated in Python 3.7+ so we can't patch it, but it would throw # as well, resulting in the same unknown distro osinfo = get_distro() self.assertListEqual(default_list, osinfo) def test_get_lis_version_should_return_a_string(self): """ On a Hyper-V guest with the LIS drivers installed as a module, this function should return a string of the version, like '4.3.5'. Anywhere else it should return 'Absent' and possibly return 'Failed' if an exception was raised, so we check that it returns a string'. """ lis_version = get_lis_version() self.assertIsInstance(lis_version, ustr) def test_get_daemon_version_should_return_the_version_that_was_previously_set(self): set_daemon_version("1.2.3.4") try: self.assertEqual( FlexibleVersion("1.2.3.4"), get_daemon_version(), "The daemon version should be 1.2.3.4. 
Environment={0}".format(os.environ) ) finally: os.environ.pop(DAEMON_VERSION_ENV_VARIABLE) def test_get_daemon_version_should_return_zero_when_the_version_has_not_been_set(self): self.assertEqual( FlexibleVersion("0.0.0.0"), get_daemon_version(), "The daemon version should not be defined. Environment={0}".format(os.environ) ) class TestCurrentAgentName(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) @patch("os.getcwd", return_value="/default/install/directory") def test_extract_name_finds_installed(self, mock_cwd): # pylint: disable=unused-argument current_agent, current_version = set_current_agent() self.assertEqual(AGENT_LONG_VERSION, current_agent) self.assertEqual(AGENT_VERSION, str(current_version)) @patch("os.getcwd", return_value="/") def test_extract_name_root_finds_installed(self, mock_cwd): # pylint: disable=unused-argument current_agent, current_version = set_current_agent() self.assertEqual(AGENT_LONG_VERSION, current_agent) self.assertEqual(AGENT_VERSION, str(current_version)) @patch("os.getcwd") def test_extract_name_in_path_finds_installed(self, mock_cwd): path = os.path.join(conf.get_lib_dir(), EVENTS_DIRECTORY) mock_cwd.return_value = path current_agent, current_version = set_current_agent() self.assertEqual(AGENT_LONG_VERSION, current_agent) self.assertEqual(AGENT_VERSION, str(current_version)) @patch("os.getcwd") def test_extract_name_finds_latest_agent(self, mock_cwd): path = os.path.join(conf.get_lib_dir(), "{0}-{1}".format( AGENT_NAME, "1.2.3")) mock_cwd.return_value = path agent = os.path.basename(path) version = AGENT_NAME_PATTERN.match(agent).group(1) current_agent, current_version = set_current_agent() self.assertEqual(agent, current_agent) self.assertEqual(version, str(current_version)) class TestGetF5Platforms(AgentTestCase): def test_get_f5_platform_bigip_12_1_1(self): version_file = textwrap.dedent(""" Product: BIG-IP Version: 12.1.1 Build: 0.0.184 Sequence: 12.1.1.0.0.184.0 BaseBuild: 0.0.184 Edition: Final Date: Thu Aug 11 
17:09:01 PDT 2016 Built: 160811170901 Changelist: 1874858 JobID: 705993""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigip') self.assertTrue(platform[1] == '12.1.1') self.assertTrue(platform[2] == 'bigip') self.assertTrue(platform[3] == 'BIG-IP') def test_get_f5_platform_bigip_12_1_0_hf1(self): version_file = textwrap.dedent(""" Product: BIG-IP Version: 12.1.0 Build: 1.0.1447 Sequence: 12.1.0.1.0.1447.0 BaseBuild: 0.0.1434 Edition: Hotfix HF1 Date: Wed Jun 8 13:41:59 PDT 2016 Built: 160608134159 Changelist: 1773831 JobID: 673467""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigip') self.assertTrue(platform[1] == '12.1.0') self.assertTrue(platform[2] == 'bigip') self.assertTrue(platform[3] == 'BIG-IP') def test_get_f5_platform_bigip_12_0_0(self): version_file = textwrap.dedent(""" Product: BIG-IP Version: 12.0.0 Build: 0.0.606 Sequence: 12.0.0.0.0.606.0 BaseBuild: 0.0.606 Edition: Final Date: Fri Aug 21 13:29:22 PDT 2015 Built: 150821132922 Changelist: 1486072 JobID: 536212""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigip') self.assertTrue(platform[1] == '12.0.0') self.assertTrue(platform[2] == 'bigip') self.assertTrue(platform[3] == 'BIG-IP') def test_get_f5_platform_iworkflow_2_0_1(self): version_file = textwrap.dedent(""" Product: iWorkflow Version: 2.0.1 Build: 0.0.9842 Sequence: 2.0.1.0.0.9842.0 BaseBuild: 0.0.9842 Edition: Final Date: Sat Oct 1 22:52:08 PDT 2016 Built: 161001225208 Changelist: 1924048 JobID: 734712""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'iworkflow') self.assertTrue(platform[1] == '2.0.1') 
self.assertTrue(platform[2] == 'iworkflow') self.assertTrue(platform[3] == 'iWorkflow') def test_get_f5_platform_bigiq_5_1_0(self): version_file = textwrap.dedent(""" Product: BIG-IQ Version: 5.1.0 Build: 0.0.631 Sequence: 5.1.0.0.0.631.0 BaseBuild: 0.0.631 Edition: Final Date: Thu Sep 15 19:55:43 PDT 2016 Built: 160915195543 Changelist: 1907534 JobID: 726344""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigiq') self.assertTrue(platform[1] == '5.1.0') self.assertTrue(platform[2] == 'bigiq') self.assertTrue(platform[3] == 'BIG-IQ') Azure-WALinuxAgent-a976115/tests/common/utils/000077500000000000000000000000001510742556200211645ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/common/utils/__init__.py000066400000000000000000000011651510742556200233000ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/common/utils/test_archive.py000066400000000000000000000160071510742556200242220ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. 
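The F5 tests above feed a `/VERSION`-style file to `get_f5_platform` and assert on a `[distro, version, distro, product]` list. The real parser lives in `azurelinuxagent.common.version` and is not shown here; the sketch below only illustrates the "Key: Value" parsing the tests exercise (the function name and return shape are assumptions):

```python
def parse_f5_version_file(contents):
    # Parse "Key: Value" lines from an F5 /VERSION file into a dict.
    info = {}
    for line in contents.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    # Mirror the [distro, version, distro, product] shape the tests assert on,
    # e.g. "BIG-IP" -> "bigip", "iWorkflow" -> "iworkflow".
    product = info.get("Product", "")
    distro = product.lower().replace("-", "")
    return [distro, info.get("Version", ""), distro, product]
```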
import os import tempfile import zipfile from datetime import datetime, timedelta import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import UTC from azurelinuxagent.common import conf from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.archive import GoalStateHistory, StateArchiver, _MAX_ARCHIVED_STATES, ARCHIVE_DIRECTORY_NAME from tests.lib.tools import AgentTestCase, patch debug = False if os.environ.get('DEBUG') == '1': debug = True # Enable verbose logger to stdout if debug: logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.VERBOSE) class TestArchive(AgentTestCase): def setUp(self): super(TestArchive, self).setUp() prefix = "{0}_".format(self.__class__.__name__) self.tmp_dir = tempfile.mkdtemp(prefix=prefix) def _write_file(self, filename, contents=None): full_name = os.path.join(conf.get_lib_dir(), filename) fileutil.mkdir(os.path.dirname(full_name)) with open(full_name, 'w') as file_handler: data = contents if contents is not None else filename file_handler.write(data) return full_name @property def history_dir(self): return os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME) def test_archive_should_zip_all_but_the_latest_goal_state_in_the_history_folder(self): test_files = [ 'GoalState.xml', 'Prod.manifest.xml', 'Prod.agentsManifest', 'Microsoft.Azure.Extensions.CustomScript.xml' ] # these directories match the pattern that StateArchiver.archive() searches for test_directories = [] for i in range(0, 3): timestamp = datetime.now(UTC) + timedelta(minutes=i) directory = os.path.join(self.history_dir, "{0}__{1}".format(GoalStateHistory._create_timestamp(timestamp), i)) for current_file in test_files: self._write_file(os.path.join(directory, current_file)) test_directories.append(directory) test_subject = StateArchiver(conf.get_lib_dir()) # NOTE: StateArchiver sorts the state directories by creation time, but the test files are created too fast and the # time resolution is 
too coarse, so instead we mock getctime to simply return the path of the file
        with patch("azurelinuxagent.common.utils.archive.os.path.getctime", side_effect=lambda path: path):
            test_subject.archive()

        for directory in test_directories[0:2]:
            zip_file = directory + ".zip"
            self.assertTrue(os.path.exists(zip_file), "{0} was not archived (could not find {1})".format(directory, zip_file))
            missing_file = self.assert_zip_contains(zip_file, test_files)
            self.assertEqual(None, missing_file, missing_file)
            self.assertFalse(os.path.exists(directory), "{0} was not removed after being archived ".format(directory))

        self.assertTrue(os.path.exists(test_directories[2]), "{0}, the latest goal state, should not have been removed".format(test_directories[2]))

    def test_goal_state_history_init_should_purge_old_items(self):
        """
        GoalStateHistory.__init__ should _purge the MAX_ARCHIVED_STATES oldest files or directories. The oldest
        timestamps are purged first. This test case creates a mixture of archive files and directories. It creates
        5 more values than MAX_ARCHIVED_STATES to ensure that 5 archives are cleaned up. It asserts that the files
        and directories are properly deleted from the disk.
        """
        count = 6
        total = _MAX_ARCHIVED_STATES + count
        start = datetime.now(UTC)
        timestamps = []

        for i in range(0, total):
            timestamp = start + timedelta(seconds=i)
            timestamps.append(timestamp)
            if i % 2 == 0:
                filename = os.path.join('history', "{0}_0".format(timestamp.isoformat()), 'Prod.manifest.xml')
            else:
                filename = os.path.join('history', "{0}_0.zip".format(timestamp.isoformat()))
            self._write_file(filename)

        self.assertEqual(total, len(os.listdir(self.history_dir)))

        # NOTE: The purge method sorts the items by creation time, but the test files are created too fast and the
        # time resolution is too coarse, so instead we mock getctime to simply return the path of the file
        with patch("azurelinuxagent.common.utils.archive.os.path.getctime", side_effect=lambda path: path):
            GoalStateHistory(datetime.now(UTC), 'test')

        archived_entries = os.listdir(self.history_dir)
        self.assertEqual(_MAX_ARCHIVED_STATES, len(archived_entries))

        archived_entries.sort()

        for i in range(0, _MAX_ARCHIVED_STATES):
            timestamp = timestamps[i + count].isoformat()
            if i % 2 == 0:
                filename = "{0}_0".format(timestamp)
            else:
                filename = "{0}_0.zip".format(timestamp)
            self.assertTrue(filename in archived_entries, "'{0}' is not in the list of unpurged entries".format(filename))

    def test_purge_legacy_goal_state_history(self):
        with patch("azurelinuxagent.common.conf.get_lib_dir", return_value=self.tmp_dir):
            # SharedConfig.xml is used by other components (Azsec and Singularity/HPC Infiniband); verify that we do not delete it
            shared_config = os.path.join(self.tmp_dir, 'SharedConfig.xml')
            legacy_files = [
                'GoalState.2.xml',
                'VmSettings.2.json',
                'Prod.2.manifest.xml',
                'ExtensionsConfig.2.xml',
                'Microsoft.Azure.Extensions.CustomScript.1.xml',
                'HostingEnvironmentConfig.xml',
                'RemoteAccess.xml',
                'waagent_status.1.json'
            ]
            legacy_files = [os.path.join(self.tmp_dir, f) for f in legacy_files]

            self._write_file(shared_config)
            for f in legacy_files:
                self._write_file(f)

            StateArchiver.purge_legacy_goal_state_history()
self.assertTrue(os.path.exists(shared_config), "{0} should not have been removed".format(shared_config)) for f in legacy_files: self.assertFalse(os.path.exists(f), "Legacy file {0} was not removed".format(f)) @staticmethod def assert_zip_contains(zip_filename, files): ziph = None try: # contextmanager for zipfile.ZipFile doesn't exist for py2.6, manually closing it ziph = zipfile.ZipFile(zip_filename, 'r') zip_files = [x.filename for x in ziph.filelist] for current_file in files: if current_file not in zip_files: return "'{0}' was not found in {1}".format(current_file, zip_filename) return None finally: if ziph is not None: ziph.close() Azure-WALinuxAgent-a976115/tests/common/utils/test_crypt_util.py000066400000000000000000000074421510742556200250020ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import unittest import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import CryptError from azurelinuxagent.common.utils.cryptutil import CryptUtil from tests.lib.tools import AgentTestCase, data_dir, load_data, is_python_version_26, skip_if_predicate_true class TestCryptoUtilOperations(AgentTestCase): def test_decrypt_encrypted_text(self): encrypted_string = load_data("wire/encrypted.enc") prv_key = os.path.join(self.tmp_dir, "TransportPrivate.pem") with open(prv_key, 'w+') as c: c.write(load_data("wire/sample.pem")) secret = ']aPPEv}uNg1FPnl?' crypto = CryptUtil(conf.get_openssl_cmd()) decrypted_string = crypto.decrypt_secret(encrypted_string, prv_key) self.assertEqual(secret, decrypted_string, "decrypted string does not match expected") def test_decrypt_encrypted_text_missing_private_key(self): encrypted_string = load_data("wire/encrypted.enc") prv_key = os.path.join(self.tmp_dir, "TransportPrivate.pem") crypto = CryptUtil(conf.get_openssl_cmd()) self.assertRaises(CryptError, crypto.decrypt_secret, encrypted_string, "abc" + prv_key) @skip_if_predicate_true(is_python_version_26, "Disabled on Python 2.6") def test_decrypt_encrypted_text_wrong_private_key(self): encrypted_string = load_data("wire/encrypted.enc") prv_key = os.path.join(self.tmp_dir, "wrong.pem") with open(prv_key, 'w+') as c: c.write(load_data("wire/trans_prv")) crypto = CryptUtil(conf.get_openssl_cmd()) self.assertRaises(CryptError, crypto.decrypt_secret, encrypted_string, prv_key) def test_decrypt_encrypted_text_text_not_encrypted(self): encrypted_string = "abc@123" prv_key = os.path.join(self.tmp_dir, "TransportPrivate.pem") with open(prv_key, 'w+') as c: c.write(load_data("wire/sample.pem")) crypto = CryptUtil(conf.get_openssl_cmd()) self.assertRaises(CryptError, crypto.decrypt_secret, encrypted_string, prv_key) def test_get_pubkey_from_crt(self): crypto = CryptUtil(conf.get_openssl_cmd()) prv_key = 
os.path.join(data_dir, "wire", "trans_prv") expected_pub_key = os.path.join(data_dir, "wire", "trans_pub") with open(expected_pub_key) as fh: self.assertEqual(fh.read(), crypto.get_pubkey_from_prv(prv_key)) def test_get_pubkey_from_prv(self): crypto = CryptUtil(conf.get_openssl_cmd()) def do_test(prv_key, expected_pub_key): prv_key = os.path.join(data_dir, "wire", prv_key) expected_pub_key = os.path.join(data_dir, "wire", expected_pub_key) with open(expected_pub_key) as fh: self.assertEqual(fh.read(), crypto.get_pubkey_from_prv(prv_key)) do_test("rsa-key.pem", "rsa-key.pub.pem") do_test("ec-key.pem", "ec-key.pub.pem") def test_get_pubkey_from_crt_invalid_file(self): crypto = CryptUtil(conf.get_openssl_cmd()) prv_key = os.path.join(data_dir, "wire", "trans_prv_does_not_exist") self.assertRaises(IOError, crypto.get_pubkey_from_prv, prv_key) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/utils/test_distro_version.py000066400000000000000000000143131510742556200256500ustar00rootroot00000000000000import os import sys import unittest from tests.lib.tools import AgentTestCase, data_dir from azurelinuxagent.common.utils.distro_version import DistroVersion from azurelinuxagent.common.utils.flexible_version import FlexibleVersion class TestDistroVersion(AgentTestCase): def test_it_should_implement_all_comparison_operators(self): self.assertTrue(DistroVersion("1.0.0") < DistroVersion("1.1.0")) self.assertTrue(DistroVersion("1.0.0") <= DistroVersion("1.0.0")) self.assertTrue(DistroVersion("1.0.0") <= DistroVersion("1.1.0")) self.assertTrue(DistroVersion("1.1.0") > DistroVersion("1.0.0")) self.assertTrue(DistroVersion("1.1.0") >= DistroVersion("1.1.0")) self.assertTrue(DistroVersion("1.1.0") >= DistroVersion("1.0.0")) self.assertTrue(DistroVersion("1.1.0") != DistroVersion("1.0.0")) self.assertTrue(DistroVersion("1.1.0") == DistroVersion("1.1.0")) def test_it_should_compare_digit_sequences_numerically(self): 
        self.assertTrue(DistroVersion("2.0.0") < DistroVersion("10.0.0"))
        self.assertTrue(DistroVersion("1.2.0") < DistroVersion("1.10.0"))
        self.assertTrue(DistroVersion("1.0.2") < DistroVersion("1.0.10"))
        self.assertTrue(DistroVersion("2.0.rc.2") < DistroVersion("2.0.rc.10"))
        self.assertTrue(DistroVersion("2.0.rc2") < DistroVersion("2.0.rc10"))

    def test_it_should_compare_non_digit_sequences_lexicographically(self):
        self.assertTrue(DistroVersion("2.0.alpha") < DistroVersion("2.0.beta"))
        self.assertTrue(DistroVersion("2.0.alpha.2") < DistroVersion("2.0.beta.1"))
        self.assertTrue(DistroVersion("alpha") < DistroVersion("beta"))
        self.assertTrue(DistroVersion("<1.0.0>") < DistroVersion(">1.0.0>"))

    def test_it_should_parse_common_distro_versions(self):
        """
        Test that DistroVersion can parse the versions given by azurelinuxagent.common.version.DISTRO_VERSION
        (the values in distro_versions.txt are current values from telemetry.)
        """
        data_file = os.path.join(data_dir, "distro_versions.txt")

        with open(data_file, "r") as f:
            for line in f:
                line = line.rstrip()
                version = DistroVersion(line)
                self.assertNotEqual([], version._fragments)

        self.assertEqual([], DistroVersion("")._fragments)

    def test_it_should_compare_commonly_used_versions(self):
        """
        Test that DistroVersion does some common comparisons correctly.
        """
        self.assertTrue(DistroVersion("1.0.0") < DistroVersion("2.0.0"))
        self.assertTrue(DistroVersion("1.0.0") < DistroVersion("1.1.0"))
        self.assertTrue(DistroVersion("1.0.0") < DistroVersion("1.0.1"))
        self.assertTrue(DistroVersion("1.0.0") == DistroVersion("1.0.0"))
        self.assertTrue(DistroVersion("1.0.0") != DistroVersion("2.0.0"))

        self.assertTrue(DistroVersion("13") != DistroVersion("13.0"))
        self.assertTrue(DistroVersion("13") < DistroVersion("13.0"))
        self.assertTrue(DistroVersion("13") < DistroVersion("13.1"))

        ubuntu_version = DistroVersion("16.10")
        self.assertTrue(ubuntu_version in [DistroVersion('16.04'), DistroVersion('16.10'), DistroVersion('17.04')])

        ubuntu_version = DistroVersion("20.10")
        self.assertTrue(DistroVersion('18.04') <= ubuntu_version <= DistroVersion('24.04'))

        redhat_version = DistroVersion("7.9")
        self.assertTrue(DistroVersion('7') <= redhat_version <= DistroVersion('9'))

        self.assertTrue(DistroVersion("1.0") < DistroVersion("1.1"))
        self.assertTrue(DistroVersion("1.9") < DistroVersion("1.10"))
        self.assertTrue(DistroVersion("1.9.9") < DistroVersion("1.10.0"))
        self.assertTrue(DistroVersion("1.0.0.0") < DistroVersion("1.2.0.0"))
        self.assertTrue(DistroVersion("1.0") <= DistroVersion("1.1"))
        self.assertTrue(DistroVersion("1.1") > DistroVersion("1.0"))
        self.assertTrue(DistroVersion("1.1") >= DistroVersion("1.0"))

        self.assertTrue(DistroVersion("1.0") == DistroVersion("1.0"))
        self.assertTrue(DistroVersion("1.0") >= DistroVersion("1.0"))
        self.assertTrue(DistroVersion("1.0") <= DistroVersion("1.0"))

    def test_uncommon_versions(self):
        """
        The comparisons in these tests may occur in prod, and they do not always produce a result that makes sense.
        More than expressing the desired behavior, these tests are meant to document the current behavior.
        """
        self.assertTrue(DistroVersion("2") != DistroVersion("2.0"))
        self.assertTrue(DistroVersion("2") < DistroVersion("2.0"))

        self.assertTrue(DistroVersion("10.0_RC2") != DistroVersion("10.0RC2"))
        self.assertTrue(DistroVersion("10.0_RC2")._fragments == [10, 0, '_', 'RC', 2])
        self.assertTrue(DistroVersion("10.0RC2")._fragments == [10, 0, 'RC', 2])

        self.assertTrue(DistroVersion("1.4-rolling") < DistroVersion("1.4-rolling-202402090309"))
        self.assertTrue(DistroVersion("2023") < DistroVersion("2023.02.1"))
        self.assertTrue(DistroVersion("2.1-systemd-alpha") < DistroVersion("2.1-systemd-rc"))
        self.assertTrue(DistroVersion("2308a") < DistroVersion("2308beta"))
        self.assertTrue(DistroVersion("6.0.0.beta4") < DistroVersion("6.0.0.beta5"))
        self.assertTrue(DistroVersion("9.13.1P8X1") < DistroVersion("9.13.1RC1"))
        self.assertTrue(DistroVersion("a") < DistroVersion("rc"))
        self.assertTrue(DistroVersion("Clawhammer__9.14.0") < DistroVersion("Clawhammer__9.14.1"))
        self.assertTrue(DistroVersion("FFFF") < DistroVersion("h"))
        self.assertTrue(DistroVersion("None") < DistroVersion("n/a"))

        if sys.version_info[0] == 2:
            self.assertTrue(DistroVersion("3.11.2-rc.1") < DistroVersion("3.11.2-rc.a"))
        else:
            # TypeError: '<' not supported between instances of 'int' and 'str'
            with self.assertRaises(TypeError):
                _ = DistroVersion("3.11.2-rc.1") < DistroVersion("3.11.2-rc.a")

        # AttributeError: 'FlexibleVersion' object has no attribute '_fragments'
        with self.assertRaises(AttributeError):
            _ = DistroVersion("1.0.0.0") == FlexibleVersion("1.0.0.0")


if __name__ == '__main__':
    unittest.main()


Azure-WALinuxAgent-a976115/tests/common/utils/test_extension_process_util.py

# Copyright Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import shutil
import subprocess
import tempfile

from azurelinuxagent.common.exception import ExtensionError, ExtensionErrorCodes
from azurelinuxagent.common.future import ustr
from azurelinuxagent.ga.cpucontroller import CpuControllerV1
from azurelinuxagent.ga.extensionprocessutil import format_stdout_stderr, read_output, \
    wait_for_process_completion_or_timeout, handle_process_completion
from tests.lib.tools import AgentTestCase, patch, data_dir


class TestProcessUtils(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        self.tmp_dir = tempfile.mkdtemp()
        self.stdout = tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b")
        self.stderr = tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b")
        self.stdout.write("The quick brown fox jumps over the lazy dog.".encode("utf-8"))
        self.stderr.write("The five boxing wizards jump quickly.".encode("utf-8"))

    def tearDown(self):
        self.stderr.close()
        self.stdout.close()
        super(TestProcessUtils, self).tearDown()

    def test_wait_for_process_completion_or_timeout_should_terminate_cleanly(self):
        process = subprocess.Popen(
            "date",
            shell=True,
            cwd=self.tmp_dir,
            env={},
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE)

        timed_out, ret, _ = wait_for_process_completion_or_timeout(process=process, timeout=5, cpu_controller=None)
        self.assertEqual(timed_out, False)
        self.assertEqual(ret, 0)

    def test_wait_for_process_completion_or_timeout_should_kill_process_on_timeout(self):
        timeout = 5
        process = subprocess.Popen(  # pylint: disable=subprocess-popen-preexec-fn
            "sleep 1m",
            shell=True,
            cwd=self.tmp_dir,
            env={},
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            preexec_fn=os.setsid)

        # We don't actually mock the kill, just wrap it so we can assert its call count
        with patch('azurelinuxagent.ga.extensionprocessutil.os.killpg', wraps=os.killpg) as patch_kill:
            with patch('time.sleep') as mock_sleep:
                timed_out, ret, _ = wait_for_process_completion_or_timeout(process=process, timeout=timeout, cpu_controller=None)

                # We're mocking sleep to avoid prolonging the test execution time, but we still want to make sure
                # we're "waiting" the correct amount of time before killing the process
                self.assertEqual(mock_sleep.call_count, timeout)

                self.assertEqual(patch_kill.call_count, 1)
                self.assertEqual(timed_out, True)
                self.assertEqual(ret, None)

    def test_handle_process_completion_should_return_nonzero_when_process_fails(self):
        process = subprocess.Popen(
            "ls folder_does_not_exist",
            shell=True,
            cwd=self.tmp_dir,
            env={},
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE)

        timed_out, ret, _ = wait_for_process_completion_or_timeout(process=process, timeout=5, cpu_controller=None)
        self.assertEqual(timed_out, False)
        self.assertEqual(ret, 2)

    def test_handle_process_completion_should_return_process_output(self):
        command = "echo 'dummy stdout' && 1>&2 echo 'dummy stderr'"
        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                process = subprocess.Popen(command,  # pylint: disable=subprocess-popen-preexec-fn
                                           shell=True,
                                           cwd=self.tmp_dir,
                                           env={},
                                           stdout=stdout,
                                           stderr=stderr,
                                           preexec_fn=os.setsid)

                process_output = handle_process_completion(process=process,
                                                           command=command,
                                                           timeout=5,
                                                           stdout=stdout,
                                                           stderr=stderr,
                                                           error_code=42)

        expected_output = "[stdout]\ndummy stdout\n\n\n[stderr]\ndummy stderr\n"
        self.assertEqual(process_output, expected_output)

    def test_handle_process_completion_should_raise_on_timeout(self):
        command = "sleep 1m"
        timeout = 20
        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch('time.sleep') as mock_sleep:
                    with self.assertRaises(ExtensionError) as context_manager:
                        process = subprocess.Popen(command,  # pylint: disable=subprocess-popen-preexec-fn
                                                   shell=True,
                                                   cwd=self.tmp_dir,
                                                   env={},
                                                   stdout=stdout,
                                                   stderr=stderr,
                                                   preexec_fn=os.setsid)
                        handle_process_completion(process=process,
                                                  command=command,
                                                  timeout=timeout,
                                                  stdout=stdout,
                                                  stderr=stderr,
                                                  error_code=42)

                    # We're mocking sleep to avoid prolonging the test execution time, but we still want to make sure
                    # we're "waiting" the correct amount of time before killing the process and raising an exception.
                    # Due to an extra call to sleep at some point in the call stack which only happens sometimes,
                    # we are relaxing this assertion to allow +/- 2 sleep calls.
                    self.assertTrue(abs(mock_sleep.call_count - timeout) <= 2)

                    self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginHandlerScriptTimedout)
                    self.assertIn("Timeout({0})".format(timeout), ustr(context_manager.exception))
                    self.assertNotIn("CPUThrottledTime({0}secs)".format(timeout), ustr(context_manager.exception))  # Extension not started in cpuCgroup

    def test_handle_process_completion_should_log_throttled_time_on_timeout(self):
        command = "sleep 1m"
        timeout = 20
        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch('time.sleep') as mock_sleep:
                    with self.assertRaises(ExtensionError) as context_manager:
                        test_file = os.path.join(self.tmp_dir, "cpu.stat")
                        shutil.copyfile(os.path.join(data_dir, "cgroups", "v1", "cpu.stat_t0"), test_file)  # throttled_time = 50
                        cpu_controller = CpuControllerV1("test", self.tmp_dir)
                        process = subprocess.Popen(command,  # pylint: disable=subprocess-popen-preexec-fn
                                                   shell=True,
                                                   cwd=self.tmp_dir,
                                                   env={},
                                                   stdout=stdout,
                                                   stderr=stderr,
                                                   preexec_fn=os.setsid)
                        handle_process_completion(process=process,
                                                  command=command,
                                                  timeout=timeout,
                                                  stdout=stdout,
                                                  stderr=stderr,
                                                  error_code=42,
                                                  cpu_controller=cpu_controller)

                    # We're mocking sleep to avoid prolonging the test execution time, but we still want to make sure
                    # we're "waiting" the correct amount of time before killing the process and raising an exception.
                    # Due to an extra call to sleep at some point in the call stack which only happens sometimes,
                    # we are relaxing this assertion to allow +/- 2 sleep calls.
                    self.assertTrue(abs(mock_sleep.call_count - timeout) <= 2)

                    self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginHandlerScriptTimedout)
                    self.assertIn("Timeout({0})".format(timeout), ustr(context_manager.exception))
                    throttled_time = float(50 / 1E9)
                    self.assertIn("CPUThrottledTime({0}secs)".format(throttled_time), ustr(context_manager.exception))

    def test_handle_process_completion_should_raise_on_nonzero_exit_code(self):
        command = "ls folder_does_not_exist"
        error_code = 42
        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with self.assertRaises(ExtensionError) as context_manager:
                    process = subprocess.Popen(command,  # pylint: disable=subprocess-popen-preexec-fn
                                               shell=True,
                                               cwd=self.tmp_dir,
                                               env={},
                                               stdout=stdout,
                                               stderr=stderr,
                                               preexec_fn=os.setsid)
                    handle_process_completion(process=process,
                                              command=command,
                                              timeout=4,
                                              stdout=stdout,
                                              stderr=stderr,
                                              error_code=error_code)

                self.assertEqual(context_manager.exception.code, error_code)
                self.assertIn("Non-zero exit code:", ustr(context_manager.exception))

    def test_read_output_should_return_no_content(self):
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 0):
            expected = ""
            actual = read_output(self.stdout, self.stderr)
            self.assertEqual(expected, actual)

    def test_read_output_should_truncate_the_content(self):
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 50):
            expected = "[stdout]\nr the lazy dog.\n\n" \
                       "[stderr]\ns jump quickly."
            actual = read_output(self.stdout, self.stderr)
            self.assertEqual(expected, actual)

    def test_read_output_should_not_truncate_the_content(self):
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 90):
            expected = "[stdout]\nThe quick brown fox jumps over the lazy dog.\n\n" \
                       "[stderr]\nThe five boxing wizards jump quickly."
            actual = read_output(self.stdout, self.stderr)
            self.assertEqual(expected, actual)

    def test_format_stdout_stderr00(self):
        """
        If stdout and stderr are both smaller than the max length, the full representation should be displayed.
        """
        stdout = "The quick brown fox jumps over the lazy dog."
        stderr = "The five boxing wizards jump quickly."

        expected = "[stdout]\n{0}\n\n[stderr]\n{1}".format(stdout, stderr)
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 1000):
            actual = format_stdout_stderr(stdout, stderr)
        self.assertEqual(expected, actual)

    def test_format_stdout_stderr01(self):
        """
        If stdout and stderr both exceed the max length, then both stdout and stderr are trimmed equally.
        """
        stdout = "The quick brown fox jumps over the lazy dog."
        stderr = "The five boxing wizards jump quickly."

        # noinspection SpellCheckingInspection
        expected = '[stdout]\ns over the lazy dog.\n\n[stderr]\nizards jump quickly.'
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 60):
            actual = format_stdout_stderr(stdout, stderr)
        self.assertEqual(expected, actual)
        self.assertEqual(60, len(actual))

    def test_format_stdout_stderr02(self):
        """
        If stderr is much larger than stdout, stderr is allowed to borrow space from stdout's quota.
        """
        stdout = "empty"
        stderr = "The five boxing wizards jump quickly."

        expected = '[stdout]\nempty\n\n[stderr]\ns jump quickly.'
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 40):
            actual = format_stdout_stderr(stdout, stderr)
        self.assertEqual(expected, actual)
        self.assertEqual(40, len(actual))

    def test_format_stdout_stderr03(self):
        """
        If stdout is much larger than stderr, stdout is allowed to borrow space from stderr's quota.
        """
        stdout = "The quick brown fox jumps over the lazy dog."
        stderr = "empty"

        expected = '[stdout]\nr the lazy dog.\n\n[stderr]\nempty'
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 40):
            actual = format_stdout_stderr(stdout, stderr)
        self.assertEqual(expected, actual)
        self.assertEqual(40, len(actual))

    def test_format_stdout_stderr04(self):
        """
        If the max length is not sufficient to even hold the stdout and stderr markers an empty string is returned.
        """
        stdout = "The quick brown fox jumps over the lazy dog."
        stderr = "The five boxing wizards jump quickly."

        expected = ''
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 4):
            actual = format_stdout_stderr(stdout, stderr)
        self.assertEqual(expected, actual)
        self.assertEqual(0, len(actual))

    def test_format_stdout_stderr05(self):
        """
        If stdout and stderr are empty, an empty template is returned.
        """
        expected = '[stdout]\n\n\n[stderr]\n'
        with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 1000):
            actual = format_stdout_stderr('', '')
        self.assertEqual(expected, actual)


# ---------------------------------------------------------------------------
# Azure-WALinuxAgent-a976115/tests/common/utils/test_file_util.py
# ---------------------------------------------------------------------------

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import errno
import glob
import os
import random
import shutil
import string
import tempfile
import unittest
import uuid

import azurelinuxagent.common.utils.fileutil as fileutil
from azurelinuxagent.common.future import ustr
from tests.lib.tools import AgentTestCase, patch


class TestFileOperations(AgentTestCase):

    def test_read_write_file(self):
        test_file = os.path.join(self.tmp_dir, self.test_file)
        content = ustr(uuid.uuid4())
        fileutil.write_file(test_file, content)

        content_read = fileutil.read_file(test_file)
        self.assertEqual(content, content_read)
        os.remove(test_file)

    def test_write_file_content_is_None(self):
        """
        write_file throws when content is None. No file is created.
        """
        try:
            test_file = os.path.join(self.tmp_dir, self.test_file)
            fileutil.write_file(test_file, None)

            self.fail("expected write_file to throw an exception")
        except:  # pylint: disable=bare-except
            self.assertEqual(False, os.path.exists(test_file))

    def test_rw_utf8_file(self):
        test_file = os.path.join(self.tmp_dir, self.test_file)
        content = u"\u6211"
        fileutil.write_file(test_file, content, encoding="utf-8")

        content_read = fileutil.read_file(test_file)
        self.assertEqual(content, content_read)
        os.remove(test_file)

    def test_remove_bom(self):
        test_file = os.path.join(self.tmp_dir, self.test_file)
        data = b'\xef\xbb\xbfhehe'
        fileutil.write_file(test_file, data, asbin=True)

        data = fileutil.read_file(test_file, remove_bom=True)
        self.assertNotEqual(0xbb, ord(data[0]))

    def test_append_file(self):
        test_file = os.path.join(self.tmp_dir, self.test_file)
        content = ustr(uuid.uuid4())
        fileutil.append_file(test_file, content)

        content_read = fileutil.read_file(test_file)
        self.assertEqual(content, content_read)
        os.remove(test_file)

    def test_findre_in_file(self):
        fp = os.path.join(self.tmp_dir, "test_findre_in_file")
        with open(fp, 'w') as f:
            f.write(
'''
First line
Second line
Third line with more words
'''
            )

        self.assertNotEqual(
            None,
            fileutil.findre_in_file(fp, ".*rst line$"))
        self.assertNotEqual(
            None,
            fileutil.findre_in_file(fp, ".*ond line$"))
        self.assertNotEqual(
            None,
            fileutil.findre_in_file(fp, ".*with more.*"))
        self.assertNotEqual(
            None,
            fileutil.findre_in_file(fp, "^Third.*"))
        self.assertEqual(
            None,
            fileutil.findre_in_file(fp, "^Do not match.*"))

    def test_findstr_in_file(self):
        fp = os.path.join(self.tmp_dir, "test_findstr_in_file")
        with open(fp, 'w') as f:
            f.write(
'''
First line
Second line
Third line with more words
'''
            )

        self.assertTrue(fileutil.findstr_in_file(fp, "First line"))
        self.assertTrue(fileutil.findstr_in_file(fp, "Second line"))
        self.assertTrue(
            fileutil.findstr_in_file(fp, "Third line with more words"))
        self.assertFalse(fileutil.findstr_in_file(fp, "Not a line"))

    def test_get_last_path_element(self):
        filepath = '/tmp/abc.def'
        filename = fileutil.base_name(filepath)
        self.assertEqual('abc.def', filename)

        filepath = '/tmp/abc'
        filename = fileutil.base_name(filepath)
        self.assertEqual('abc', filename)

    def test_remove_files(self):
        random_word = lambda: ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))

        # Create 10 test files
        test_file = os.path.join(self.tmp_dir, self.test_file)
        test_file2 = os.path.join(self.tmp_dir, 'another_file')
        test_files = [test_file + random_word() for _ in range(5)] + \
                     [test_file2 + random_word() for _ in range(5)]
        for file in test_files:  # pylint: disable=redefined-builtin
            open(file, 'a').close()

        # Remove files using fileutil.rm_files
        test_file_pattern = test_file + '*'
        test_file_pattern2 = test_file2 + '*'
        fileutil.rm_files(test_file_pattern, test_file_pattern2)

        self.assertEqual(0, len(glob.glob(os.path.join(self.tmp_dir, test_file_pattern))))
        self.assertEqual(0, len(glob.glob(os.path.join(self.tmp_dir, test_file_pattern2))))

    def test_remove_dirs(self):
        dirs = []
        for n in range(0, 5):
            dirs.append(tempfile.mkdtemp())
        for d in dirs:
            for n in range(0, random.choice(range(0, 10))):
                fileutil.write_file(os.path.join(d, "test" + str(n)), "content")
            for n in range(0, random.choice(range(0, 10))):
                dd = os.path.join(d, "testd" + str(n))
                os.mkdir(dd)
                for nn in range(0, random.choice(range(0, 10))):
                    os.symlink(dd, os.path.join(dd, "sym" + str(nn)))
            for n in range(0, random.choice(range(0, 10))):
                os.symlink(d, os.path.join(d, "sym" + str(n)))

        fileutil.rm_dirs(*dirs)

        for d in dirs:
            self.assertEqual(len(os.listdir(d)), 0)

    def test_get_all_files(self):
        random_word = lambda: ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))

        # Create 10 test files at the root dir and 10 other in the sub dir
        test_file = os.path.join(self.tmp_dir, self.test_file)
        test_file2 = os.path.join(self.tmp_dir, 'another_file')
        expected_files = [test_file + random_word() for _ in range(5)] + \
                         [test_file2 + random_word() for _ in range(5)]

        test_subdir = os.path.join(self.tmp_dir, 'test_dir')
        os.mkdir(test_subdir)
        test_file_in_subdir = os.path.join(test_subdir, self.test_file)
        test_file_in_subdir2 = os.path.join(test_subdir, 'another_file')
        expected_files.extend([test_file_in_subdir + random_word() for _ in range(5)] +
                              [test_file_in_subdir2 + random_word() for _ in range(5)])

        for file in expected_files:  # pylint: disable=redefined-builtin
            open(file, 'a').close()

        # Get all files using fileutil.get_all_files
        actual_files = fileutil.get_all_files(self.tmp_dir)

        self.assertEqual(set(expected_files), set(actual_files))

    @patch('os.path.isfile')
    def test_update_conf_file(self, _):
        new_file = "\
DEVICE=eth0\n\
ONBOOT=yes\n\
BOOTPROTO=dhcp\n\
TYPE=Ethernet\n\
USERCTL=no\n\
PEERDNS=yes\n\
IPV6INIT=no\n\
NM_CONTROLLED=yes\n"

        existing_file = "\
DEVICE=eth0\n\
ONBOOT=yes\n\
BOOTPROTO=dhcp\n\
TYPE=Ethernet\n\
DHCP_HOSTNAME=existing\n\
USERCTL=no\n\
PEERDNS=yes\n\
IPV6INIT=no\n\
NM_CONTROLLED=yes\n"

        bad_file = "\
DEVICE=eth0\n\
ONBOOT=yes\n\
BOOTPROTO=dhcp\n\
TYPE=Ethernet\n\
USERCTL=no\n\
PEERDNS=yes\n\
IPV6INIT=no\n\
NM_CONTROLLED=yes\n\
DHCP_HOSTNAME=no_new_line"

        updated_file = "\
DEVICE=eth0\n\
ONBOOT=yes\n\
BOOTPROTO=dhcp\n\
TYPE=Ethernet\n\
USERCTL=no\n\
PEERDNS=yes\n\
IPV6INIT=no\n\
NM_CONTROLLED=yes\n\
DHCP_HOSTNAME=test\n"

        path = 'path'
        with patch.object(fileutil, 'write_file') as patch_write:
            with patch.object(fileutil, 'read_file', return_value=new_file):
                fileutil.update_conf_file(path, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME=test')
                patch_write.assert_called_once_with(path, updated_file)

        with patch.object(fileutil, 'write_file') as patch_write:
            with patch.object(fileutil, 'read_file', return_value=existing_file):
                fileutil.update_conf_file(path, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME=test')
                patch_write.assert_called_once_with(path, updated_file)

        with patch.object(fileutil, 'write_file') as patch_write:
            with patch.object(fileutil, 'read_file', return_value=bad_file):
                fileutil.update_conf_file(path, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME=test')
                patch_write.assert_called_once_with(path, updated_file)

    def test_clean_ioerror_ignores_missing(self):
        e = IOError()
        e.errno = errno.ENOSPC

        # Send no paths
        fileutil.clean_ioerror(e)

        # Send missing file(s) / directories
        fileutil.clean_ioerror(e, paths=['/foo/not/here', None, '/bar/not/there'])

    def test_clean_ioerror_ignores_unless_ioerror(self):
        try:
            d = tempfile.mkdtemp()
            fd, f = tempfile.mkstemp()
            os.close(fd)
            fileutil.write_file(f, 'Not empty')

            # Send non-IOError exception
            e = Exception()
            fileutil.clean_ioerror(e, paths=[d, f])
            self.assertTrue(os.path.isdir(d))
            self.assertTrue(os.path.isfile(f))

            # Send unrecognized IOError
            e = IOError()
            e.errno = errno.EFAULT
            self.assertFalse(e.errno in fileutil.KNOWN_IOERRORS)
            fileutil.clean_ioerror(e, paths=[d, f])
            self.assertTrue(os.path.isdir(d))
            self.assertTrue(os.path.isfile(f))
        finally:
            shutil.rmtree(d)
            os.remove(f)

    def test_clean_ioerror_removes_files(self):
        fd, f = tempfile.mkstemp()
        os.close(fd)
        fileutil.write_file(f, 'Not empty')

        e = IOError()
        e.errno = errno.ENOSPC
        fileutil.clean_ioerror(e, paths=[f])

        self.assertFalse(os.path.isdir(f))
        self.assertFalse(os.path.isfile(f))

    def test_clean_ioerror_removes_directories(self):
        d1 = tempfile.mkdtemp()
        d2 = tempfile.mkdtemp()
        for n in ['foo', 'bar']:
            fileutil.write_file(os.path.join(d2, n), 'Not empty')

        e = IOError()
        e.errno = errno.ENOSPC
        fileutil.clean_ioerror(e, paths=[d1, d2])

        self.assertFalse(os.path.isdir(d1))
        self.assertFalse(os.path.isfile(d1))
        self.assertFalse(os.path.isdir(d2))
        self.assertFalse(os.path.isfile(d2))

    def test_clean_ioerror_handles_a_range_of_errors(self):
        for err in fileutil.KNOWN_IOERRORS:
            e = IOError()
            e.errno = err

            d = tempfile.mkdtemp()
            fileutil.clean_ioerror(e, paths=[d])
            self.assertFalse(os.path.isdir(d))
            self.assertFalse(os.path.isfile(d))


if __name__ == '__main__':
    unittest.main()
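# The clean_ioerror tests above exercise a "remove partially written files on disk-full
# style errors" pattern. The sketch below is a simplified, stdlib-only re-implementation
# of that behavior for illustration only: it is NOT the agent's code (the real helper is
# azurelinuxagent.common.utils.fileutil.clean_ioerror), and the KNOWN_IOERRORS list here
# is an assumed subset chosen for the demo.

```python
import errno
import os
import shutil
import tempfile

# Assumption: an illustrative subset of errno values treated as "known" IO errors.
KNOWN_IOERRORS = [errno.EIO, errno.ENOMEM, errno.ENOSPC, errno.ENOENT]


def clean_ioerror(exc, paths=()):
    """Best-effort cleanup: on a known IOError/OSError, remove the given paths."""
    if isinstance(exc, (IOError, OSError)) and exc.errno in KNOWN_IOERRORS:
        for path in paths:
            if path is None:
                continue
            try:
                if os.path.isfile(path):
                    os.remove(path)
                elif os.path.isdir(path):
                    shutil.rmtree(path)
            except OSError:
                pass  # cleanup is best-effort; ignore secondary failures


# Demo: a non-IOError leaves the directory alone; ENOSPC removes it.
d = tempfile.mkdtemp()
clean_ioerror(Exception(), paths=[d])
assert os.path.isdir(d)

err = IOError()
err.errno = errno.ENOSPC
clean_ioerror(err, paths=[d, '/path/not/here', None])
assert not os.path.isdir(d)
```

# Note that missing paths and None entries are tolerated silently, matching the
# "ignores missing" behavior the tests above assert.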
# ---------------------------------------------------------------------------
# Azure-WALinuxAgent-a976115/tests/common/utils/test_flexible_version.py
# ---------------------------------------------------------------------------

import re
import unittest

from azurelinuxagent.common.utils.flexible_version import FlexibleVersion


class TestFlexibleVersion(unittest.TestCase):

    def setUp(self):
        self.v = FlexibleVersion()

    def test_compile_separator(self):
        tests = [
            '.',
            '',
            '-'
        ]
        for t in tests:
            t_escaped = re.escape(t)
            t_re = re.compile(t_escaped)
            self.assertEqual((t_escaped, t_re), self.v._compile_separator(t))
        self.assertEqual(('', re.compile('')), self.v._compile_separator(None))
        return

    def test_compile_pattern(self):
        self.v._compile_pattern()
        tests = {
            '1': True, '1.2': True, '1.2.3': True, '1.2.3.4': True, '1.2.3.4.5': True,
            '1alpha': True, '1.alpha': True, '1-alpha': True,
            '1alpha0': True, '1.alpha0': True, '1-alpha0': True,
            '1.2alpha': True, '1.2.alpha': True, '1.2-alpha': True,
            '1.2alpha0': True, '1.2.alpha0': True, '1.2-alpha0': True,
            '1beta': True, '1.beta': True, '1-beta': True,
            '1beta0': True, '1.beta0': True, '1-beta0': True,
            '1.2beta': True, '1.2.beta': True, '1.2-beta': True,
            '1.2beta0': True, '1.2.beta0': True, '1.2-beta0': True,
            '1rc': True, '1.rc': True, '1-rc': True,
            '1rc0': True, '1.rc0': True, '1-rc0': True,
            '1.2rc': True, '1.2.rc': True, '1.2-rc': True,
            '1.2rc0': True, '1.2.rc0': True, '1.2-rc0': True,
            '1.2.3.4alpha5': True,
            ' 1': False,
            'beta': False,
            '1delta0': False,
            '': False
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                self.v.version_re.match(test) is not None,
                "test: {0} expected: {1} ".format(test, expectation))
        return

    def test_compile_pattern_sep(self):
        self.v.sep = '-'
        self.v._compile_pattern()
        tests = {
            '1': True, '1-2': True, '1-2-3': True, '1-2-3-4': True, '1-2-3-4-5': True,
            '1alpha': True, '1-alpha': True, '1alpha0': True, '1-alpha0': True,
            '1-2alpha': True, '1-2.alpha': True, '1-2-alpha': True,
            '1-2alpha0': True, '1-2.alpha0': True, '1-2-alpha0': True,
            '1beta': True, '1-beta': True, '1beta0': True, '1-beta0': True,
            '1-2beta': True, '1-2.beta': True, '1-2-beta': True,
            '1-2beta0': True, '1-2.beta0': True, '1-2-beta0': True,
            '1rc': True, '1-rc': True, '1rc0': True, '1-rc0': True,
            '1-2rc': True, '1-2.rc': True, '1-2-rc': True,
            '1-2rc0': True, '1-2.rc0': True, '1-2-rc0': True,
            '1-2-3-4alpha5': True,
            ' 1': False,
            'beta': False,
            '1delta0': False,
            '': False
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                self.v.version_re.match(test) is not None,
                "test: {0} expected: {1} ".format(test, expectation))
        return

    def test_compile_pattern_prerel(self):
        self.v.prerel_tags = ('a', 'b', 'c')
        self.v._compile_pattern()
        tests = {
            '1': True, '1.2': True, '1.2.3': True, '1.2.3.4': True, '1.2.3.4.5': True,
            '1a': True, '1.a': True, '1-a': True,
            '1a0': True, '1.a0': True, '1-a0': True,
            '1.2a': True, '1.2.a': True, '1.2-a': True,
            '1.2a0': True, '1.2.a0': True, '1.2-a0': True,
            '1b': True, '1.b': True, '1-b': True,
            '1b0': True, '1.b0': True, '1-b0': True,
            '1.2b': True, '1.2.b': True, '1.2-b': True,
            '1.2b0': True, '1.2.b0': True, '1.2-b0': True,
            '1c': True, '1.c': True, '1-c': True,
            '1c0': True, '1.c0': True, '1-c0': True,
            '1.2c': True, '1.2.c': True, '1.2-c': True,
            '1.2c0': True, '1.2.c0': True, '1.2-c0': True,
            '1.2.3.4a5': True,
            ' 1': False,
            '1.2.3.4alpha5': False,
            'beta': False,
            '1delta0': False,
            '': False
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                self.v.version_re.match(test) is not None,
                "test: {0} expected: {1} ".format(test, expectation))
        return

    def test_ensure_compatible_separators(self):
        v1 = FlexibleVersion('1.2.3')
        v2 = FlexibleVersion('1-2-3', sep='-')
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible separators failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible separators raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_ensure_compatible_prerel(self):
        v1 = FlexibleVersion('1.2.3', prerel_tags=('alpha', 'beta', 'rc'))
        v2 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b', 'c'))
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible prerel_tags failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible prerel_tags raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_ensure_compatible_prerel_length(self):
        v1 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b', 'c'))
        v2 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b'))
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible prerel_tags failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible prerel_tags raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_ensure_compatible_prerel_order(self):
        v1 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b'))
        v2 = FlexibleVersion('1.2.3', prerel_tags=('b', 'a'))
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible prerel_tags failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible prerel_tags raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_major(self):
        tests = {
            '1': 1,
            '1.2': 1,
            '1.2.3': 1,
            '1.2.3.4': 1
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                FlexibleVersion(test).major)
        return

    def test_minor(self):
        tests = {
            '1': 0,
            '1.2': 2,
            '1.2.3': 2,
            '1.2.3.4': 2
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                FlexibleVersion(test).minor)
        return

    def test_patch(self):
        tests = {
            '1': 0,
            '1.2': 0,
            '1.2.3': 3,
            '1.2.3.4': 3
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                FlexibleVersion(test).patch)
        return

    def test_parse(self):
        tests = {
            "1.2.3.4": ((1, 2, 3, 4), None),
            "1.2.3.4alpha5": ((1, 2, 3, 4), ('alpha', 5)),
            "1.2.3.4-alpha5": ((1, 2, 3, 4), ('alpha', 5)),
            "1.2.3.4.alpha5": ((1, 2, 3, 4), ('alpha', 5))
        }
        for test in iter(tests):
            expectation = tests[test]
            self.v._parse(test)
            self.assertEqual(expectation, (self.v.version, self.v.prerelease))
        return

    def test_decrement(self):
        src_v = FlexibleVersion('1.0.0.0.10')
        dst_v = FlexibleVersion(str(src_v))
        for i in range(1, 10):
            dst_v -= 1
            self.assertEqual(i, src_v.version[-1] - dst_v.version[-1])
        return

    def test_decrement_disallows_below_zero(self):
        try:
            FlexibleVersion('1.0') - 1  # pylint: disable=expression-not-assigned
            self.assertTrue(False, "Decrement failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ArithmeticError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            self.assertTrue(False, "Decrement raised an unexpected exception: {0}".format(t))  # pylint: disable=redundant-unittest-assert
        return

    def test_increment(self):
        src_v = FlexibleVersion('1.0.0.0.0')
        dst_v = FlexibleVersion(str(src_v))
        for i in range(1, 10):
            dst_v += 1
            self.assertEqual(i, dst_v.version[-1] - src_v.version[-1])
        return

    def test_str(self):
        tests = [
            '1', '1.2', '1.2.3', '1.2.3.4', '1.2.3.4.5',
            '1alpha', '1.alpha', '1-alpha',
            '1alpha0', '1.alpha0', '1-alpha0',
            '1.2alpha', '1.2.alpha', '1.2-alpha',
            '1.2alpha0', '1.2.alpha0', '1.2-alpha0',
            '1beta', '1.beta', '1-beta',
            '1beta0', '1.beta0', '1-beta0',
            '1.2beta', '1.2.beta', '1.2-beta',
            '1.2beta0', '1.2.beta0', '1.2-beta0',
            '1rc', '1.rc', '1-rc',
            '1rc0', '1.rc0', '1-rc0',
            '1.2rc', '1.2.rc', '1.2-rc',
            '1.2rc0', '1.2.rc0', '1.2-rc0',
            '1.2.3.4alpha5',
        ]
        for test in tests:
            self.assertEqual(test, str(FlexibleVersion(test)))
        return

    def test_creation_from_flexible_version(self):
        tests = [
            '1', '1.2', '1.2.3', '1.2.3.4', '1.2.3.4.5',
            '1alpha', '1.alpha', '1-alpha',
            '1alpha0', '1.alpha0', '1-alpha0',
            '1.2alpha', '1.2.alpha', '1.2-alpha',
            '1.2alpha0', '1.2.alpha0', '1.2-alpha0',
            '1beta', '1.beta', '1-beta',
            '1beta0', '1.beta0', '1-beta0',
            '1.2beta', '1.2.beta', '1.2-beta',
            '1.2beta0', '1.2.beta0', '1.2-beta0',
            '1rc', '1.rc', '1-rc',
            '1rc0', '1.rc0', '1-rc0',
            '1.2rc', '1.2.rc', '1.2-rc',
            '1.2rc0', '1.2.rc0', '1.2-rc0',
            '1.2.3.4alpha5',
        ]
        for test in tests:
            v = FlexibleVersion(test)
            self.assertEqual(test, str(FlexibleVersion(v)))
        return

    def test_repr(self):
        v = FlexibleVersion('1,2,3rc4', ',', ['lol', 'rc'])
        expected = "FlexibleVersion ('1,2,3rc4', ',', ('lol', 'rc'))"
        self.assertEqual(expected, repr(v))

    def test_order(self):
        test0 = ["1.7.0", "1.7.0rc0", "1.11.0"]
        expected0 = ['1.7.0rc0', '1.7.0', '1.11.0']
        self.assertEqual(expected0, list(map(str, sorted([FlexibleVersion(v) for v in test0]))))

        test1 = [
            '2.0.2rc2', '2.2.0beta3', '2.0.10', '2.1.0alpha42',
            '2.0.2beta4', '2.1.1', '2.0.1', '2.0.2rc3',
            '2.2.0', '2.0.0', '3.0.1', '2.1.0rc1'
        ]
        expected1 = [
            '2.0.0', '2.0.1', '2.0.2beta4', '2.0.2rc2',
            '2.0.2rc3', '2.0.10', '2.1.0alpha42', '2.1.0rc1',
            '2.1.1', '2.2.0beta3', '2.2.0', '3.0.1'
        ]
        self.assertEqual(expected1, list(map(str, sorted([FlexibleVersion(v) for v in test1]))))

        self.assertEqual(FlexibleVersion("1.0.0.0.0.0.0.0"), FlexibleVersion("1"))

        self.assertFalse(FlexibleVersion("1.0") > FlexibleVersion("1.0"))
        self.assertFalse(FlexibleVersion("1.0") < FlexibleVersion("1.0"))

        self.assertTrue(FlexibleVersion("1.0") < FlexibleVersion("1.1"))
        self.assertTrue(FlexibleVersion("1.9") < FlexibleVersion("1.10"))
        self.assertTrue(FlexibleVersion("1.9.9") < FlexibleVersion("1.10.0"))
        self.assertTrue(FlexibleVersion("1.0.0.0") < FlexibleVersion("1.2.0.0"))

        self.assertTrue(FlexibleVersion("1.1") > FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.10") > FlexibleVersion("1.9"))
        self.assertTrue(FlexibleVersion("1.10.0") > FlexibleVersion("1.9.9"))
        self.assertTrue(FlexibleVersion("1.2.0.0") > FlexibleVersion("1.0.0.0"))

        self.assertTrue(FlexibleVersion("1.0") <= FlexibleVersion("1.1"))
        self.assertTrue(FlexibleVersion("1.1") > FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.1") >= FlexibleVersion("1.0"))

        self.assertTrue(FlexibleVersion("1.0") == FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.0") >= FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.0") <= FlexibleVersion("1.0"))

        self.assertFalse(FlexibleVersion("1.0") != FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.1") != FlexibleVersion("1.0"))
        return


if __name__ == '__main__':
    unittest.main()


# ---------------------------------------------------------------------------
# Azure-WALinuxAgent-a976115/tests/common/utils/test_network_util.py
# ---------------------------------------------------------------------------

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import azurelinuxagent.common.utils.networkutil as networkutil
from tests.lib.tools import AgentTestCase


class TestNetworkOperations(AgentTestCase):
    def test_route_entry(self):
        interface = "eth0"
        mask = "C0FFFFFF"       # 255.255.255.192
        destination = "C0BB910A"    #
        gateway = "C1BB910A"
        flags = "1"
        metric = "0"

        expected = 'Iface: eth0\tDestination: 10.145.187.192\tGateway: 10.145.187.193\tMask: 255.255.255.192\tFlags: 0x0001\tMetric: 0'
        expected_json = '{"Iface": "eth0", "Destination": "10.145.187.192", "Gateway": "10.145.187.193", "Mask": "255.255.255.192", "Flags": "0x0001", "Metric": "0"}'

        entry = networkutil.RouteEntry(interface, destination, gateway, mask, flags, metric)

        self.assertEqual(str(entry), expected)
        self.assertEqual(entry.to_json(), expected_json)

    def test_nic_link_only(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link info")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info" }')

    def test_nic_ipv4(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link info")
        nic.add_ipv4("ipv4-1")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv4": ["ipv4-1"] }')
        nic.add_ipv4("ipv4-2")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv4": ["ipv4-1","ipv4-2"] }')

    def test_nic_ipv6(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link info")
        nic.add_ipv6("ipv6-1")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv6": ["ipv6-1"] }')
        nic.add_ipv6("ipv6-2")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv6": ["ipv6-1","ipv6-2"] }')

    def test_nic_ordinary(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link INFO")
        nic.add_ipv6("ipv6-1")
        nic.add_ipv4("ipv4-1")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link INFO", "ipv4": ["ipv4-1"], "ipv6": ["ipv6-1"] }')
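The `test_route_entry` expectations above ("C0BB910A" rendering as 10.145.187.192, mask "C0FFFFFF" as 255.255.255.192) follow the `/proc/net/route` convention of hex fields in host (little-endian on x86) byte order. A minimal stdlib sketch of that decoding — the helper name `hex_to_dotted_quad` is illustrative, not part of `networkutil`:

```python
import socket
import struct


def hex_to_dotted_quad(hex_field):
    """Decode a hex address field as found in /proc/net/route
    (e.g. 'C0BB910A') into dotted-quad notation (e.g. '10.145.187.192')."""
    # /proc/net/route stores addresses in host byte order (little-endian
    # on x86), so pack the integer little-endian before formatting it.
    return socket.inet_ntoa(struct.pack("<I", int(hex_field, 16)))


if __name__ == "__main__":
    print(hex_to_dotted_quad("C0BB910A"))  # destination -> 10.145.187.192
    print(hex_to_dotted_quad("C1BB910A"))  # gateway     -> 10.145.187.193
    print(hex_to_dotted_quad("C0FFFFFF"))  # netmask     -> 255.255.255.192
```

The three sample values decode to exactly the destination, gateway, and mask strings asserted in the test above.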
Azure-WALinuxAgent-a976115/tests/common/utils/test_rest_util.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import unittest

from azurelinuxagent.common.exception import HttpError, ResourceGoneError, InvalidContainerError
import azurelinuxagent.common.utils.restutil as restutil
from azurelinuxagent.common.utils.restutil import HTTP_USER_AGENT
from azurelinuxagent.common.future import httpclient, ustr
from tests.lib.tools import AgentTestCase, call, Mock, MagicMock, patch


class TestIOErrorCounter(AgentTestCase):
    def test_increment_hostplugin(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()

        restutil.IOErrorCounter.increment(
            restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(1, counts["hostplugin"])
        self.assertEqual(0, counts["protocol"])
        self.assertEqual(0, counts["other"])

    def test_increment_protocol(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()

        restutil.IOErrorCounter.increment(
            restutil.KNOWN_WIRESERVER_IP, 80)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(0, counts["hostplugin"])
        self.assertEqual(1, counts["protocol"])
        self.assertEqual(0, counts["other"])

    def test_increment_other(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()

        restutil.IOErrorCounter.increment(
            '169.254.169.254', 80)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(0, counts["hostplugin"])
        self.assertEqual(0, counts["protocol"])
        self.assertEqual(1, counts["other"])

    def test_get_and_reset(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()

        restutil.IOErrorCounter.increment(
            restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT)
        restutil.IOErrorCounter.increment(
            restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT)
        restutil.IOErrorCounter.increment(
            restutil.KNOWN_WIRESERVER_IP, 80)
        restutil.IOErrorCounter.increment(
            '169.254.169.254', 80)
        restutil.IOErrorCounter.increment(
            '169.254.169.254', 80)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(2, counts.get("hostplugin"))
        self.assertEqual(1, counts.get("protocol"))
        self.assertEqual(2, counts.get("other"))
        self.assertEqual(
            {"hostplugin":0, "protocol":0, "other":0},
            restutil.IOErrorCounter._counts)


class TestHttpOperations(AgentTestCase):

    def test_parse_url(self):
        test_uri = "http://abc.def/ghi#hash?jkl=mn"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)  # pylint: disable=unused-variable
        self.assertEqual("abc.def", host)
        self.assertEqual("/ghi#hash?jkl=mn", rel_uri)

        test_uri = "http://abc.def/"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)
        self.assertEqual("abc.def", host)
        self.assertEqual("/", rel_uri)
        self.assertEqual(False, secure)

        test_uri = "https://abc.def/ghi?jkl=mn"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)
        self.assertEqual(True, secure)

        test_uri = "http://abc.def:80/"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)
        self.assertEqual("abc.def", host)

        host, port, secure, rel_uri = restutil._parse_url("")
        self.assertEqual(None, host)
        self.assertEqual(rel_uri, "")

        host, port, secure, rel_uri = restutil._parse_url("None")
        self.assertEqual(None, host)
        self.assertEqual(rel_uri, "None")

    def test_trim_url_parameters(self):
        test_uri = "http://abc.def/ghi#hash?jkl=mn"
        rel_uri = restutil._trim_url_parameters(test_uri)
        self.assertEqual("http://abc.def/ghi", rel_uri)

        test_uri = "https://abc.def/ghi?jkl=mn"
        rel_uri = restutil._trim_url_parameters(test_uri)
        self.assertEqual("https://abc.def/ghi", rel_uri)

        test_uri = "https://abc.def:8443/ghi?jkl=mn"
        rel_uri = restutil._trim_url_parameters(test_uri)
        self.assertEqual("https://abc.def:8443/ghi", rel_uri)

        rel_uri = restutil._trim_url_parameters("")
        self.assertEqual(rel_uri, "")

        rel_uri = restutil._trim_url_parameters("None")
        self.assertEqual(rel_uri, "None")

    @patch('azurelinuxagent.common.conf.get_httpproxy_port')
    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_none_is_default(self, mock_host, mock_port):
        mock_host.return_value = None
        mock_port.return_value = None
        h, p = restutil._get_http_proxy()
        self.assertEqual(None, h)
        self.assertEqual(None, p)

    @patch('azurelinuxagent.common.conf.get_httpproxy_port')
    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_configuration_overrides_env(self, mock_host, mock_port):
        mock_host.return_value = "host"
        mock_port.return_value = None
        h, p = restutil._get_http_proxy()
        self.assertEqual("host", h)
        self.assertEqual(None, p)
        self.assertEqual(1, mock_host.call_count)
        self.assertEqual(1, mock_port.call_count)

    @patch('azurelinuxagent.common.conf.get_httpproxy_port')
    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_configuration_requires_host(self, mock_host, mock_port):
        mock_host.return_value = None
        mock_port.return_value = None
        h, p = restutil._get_http_proxy()
        self.assertEqual(None, h)
        self.assertEqual(None, p)
        self.assertEqual(1, mock_host.call_count)
        self.assertEqual(0, mock_port.call_count)

    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_http_uses_httpproxy(self, mock_host):
        mock_host.return_value = None
        with patch.dict(os.environ, {
                'http_proxy' : 'http://foo.com:80',
                'https_proxy' : 'https://bar.com:443'
            }):
            h, p = restutil._get_http_proxy()
            self.assertEqual("foo.com", h)
            self.assertEqual(80, p)

    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_https_uses_httpsproxy(self, mock_host):
        mock_host.return_value = None
        with patch.dict(os.environ, {
                'http_proxy' : 'http://foo.com:80',
                'https_proxy' : 'https://bar.com:443'
            }):
            h, p = restutil._get_http_proxy(secure=True)
            self.assertEqual("bar.com", h)
            self.assertEqual(443, p)

    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_ignores_user_in_httpproxy(self, mock_host):
        mock_host.return_value = None
        with patch.dict(os.environ, {
                'http_proxy' : 'http://user:pw@foo.com:80'
            }):
            h, p = restutil._get_http_proxy()
            self.assertEqual("foo.com", h)
            self.assertEqual(80, p)

    def test_get_no_proxy_with_values_set(self):
        no_proxy_list = ["foo.com", "www.google.com"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)
            }):
            no_proxy_from_environment = restutil.get_no_proxy()

            self.assertEqual(len(no_proxy_list), len(no_proxy_from_environment))

            for i, j in zip(no_proxy_from_environment, no_proxy_list):
                self.assertEqual(i, j)

    def test_get_no_proxy_with_incorrect_variable_set(self):
        no_proxy_list = ["foo.com", "www.google.com", "", ""]
        no_proxy_list_cleaned = [entry for entry in no_proxy_list if entry]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)
            }):
            no_proxy_from_environment = restutil.get_no_proxy()

            self.assertEqual(len(no_proxy_list_cleaned), len(no_proxy_from_environment))

            for i, j in zip(no_proxy_from_environment, no_proxy_list_cleaned):
                print(i, j)
                self.assertEqual(i, j)

    def test_get_no_proxy_with_ip_addresses_set(self):
        no_proxy_var = "10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6,10.0.0.7,10.0.0.8,10.0.0.9,10.0.0.10,"
        no_proxy_list = ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5',
                         '10.0.0.6', '10.0.0.7', '10.0.0.8', '10.0.0.9', '10.0.0.10']
        with patch.dict(os.environ, {
                'no_proxy': no_proxy_var
            }):
            no_proxy_from_environment = restutil.get_no_proxy()

            self.assertEqual(len(no_proxy_list), len(no_proxy_from_environment))

            for i, j in zip(no_proxy_from_environment, no_proxy_list):
                self.assertEqual(i, j)

    def test_get_no_proxy_default(self):
        no_proxy_generator = restutil.get_no_proxy()
        self.assertIsNone(no_proxy_generator)

    def test_is_ipv4_address(self):
        self.assertTrue(restutil.is_ipv4_address('8.8.8.8'))
        self.assertFalse(restutil.is_ipv4_address('localhost.localdomain'))
        self.assertFalse(restutil.is_ipv4_address('2001:4860:4860::8888'))  # ipv6 tests

    def test_is_valid_cidr(self):
        self.assertTrue(restutil.is_valid_cidr('192.168.1.0/24'))
        self.assertFalse(restutil.is_valid_cidr('8.8.8.8'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.0/a'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.0/128'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.0/-1'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.999/24'))

    def test_address_in_network(self):
        self.assertTrue(restutil.address_in_network('192.168.1.1', '192.168.1.0/24'))
        self.assertFalse(restutil.address_in_network('172.16.0.1', '192.168.1.0/24'))

    def test_dotted_netmask(self):
        self.assertEqual(restutil.dotted_netmask(0), '0.0.0.0')
        self.assertEqual(restutil.dotted_netmask(8), '255.0.0.0')
        self.assertEqual(restutil.dotted_netmask(16), '255.255.0.0')
        self.assertEqual(restutil.dotted_netmask(24), '255.255.255.0')
        self.assertEqual(restutil.dotted_netmask(32), '255.255.255.255')
        self.assertRaises(ValueError, restutil.dotted_netmask, 33)

    def test_bypass_proxy(self):
        no_proxy_list = ["foo.com", "www.google.com", "168.63.129.16", "Microsoft.com"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)
            }):
            self.assertFalse(restutil.bypass_proxy("http://bar.com"))
            self.assertTrue(restutil.bypass_proxy("http://foo.com"))
            self.assertTrue(restutil.bypass_proxy("http://168.63.129.16"))
            self.assertFalse(restutil.bypass_proxy("http://baz.com"))
            self.assertFalse(restutil.bypass_proxy("http://10.1.1.1"))
            self.assertTrue(restutil.bypass_proxy("http://www.microsoft.com"))

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_direct(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=\
                Mock(return_value=\
                    Mock(read=Mock(return_value="TheResults"))))
        HTTPConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10)

        HTTPConnection.assert_has_calls([
            call("foo", 80, timeout=10)
        ])
        HTTPSConnection.assert_not_called()
        mock_conn.request.assert_has_calls([
            call(method="GET", url="/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_direct_secure(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=\
                Mock(return_value=\
                    Mock(read=Mock(return_value="TheResults"))))
        HTTPSConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10, secure=True)

        HTTPConnection.assert_not_called()
        HTTPSConnection.assert_has_calls([
            call("foo", 443, timeout=10)
        ])
        mock_conn.request.assert_has_calls([
            call(method="GET", url="/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_proxy(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=\
                Mock(return_value=\
                    Mock(read=Mock(return_value="TheResults"))))
        HTTPConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10,
                                      proxy_host="foo.bar", proxy_port=23333)

        HTTPConnection.assert_has_calls([
            call("foo.bar", 23333, timeout=10)
        ])
        HTTPSConnection.assert_not_called()
        mock_conn.request.assert_has_calls([
            call(method="GET", url="http://foo:80/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("azurelinuxagent.common.utils.restutil._get_http_proxy")
    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_proxy_with_no_proxy_check(self, _http_request, sleep, mock_get_http_proxy):  # pylint: disable=unused-argument
        mock_http_resp = MagicMock()
        mock_http_resp.read = Mock(return_value="hehe")
        _http_request.return_value = mock_http_resp
        mock_get_http_proxy.return_value = "host", 1234  # Return a host/port combination

        no_proxy_list = ["foo.com", "www.google.com", "168.63.129.16"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)
            }):
            # Test http get
            resp = restutil.http_get("http://foo.com", use_proxy=True)
            self.assertEqual("hehe", resp.read())
            self.assertEqual(0, mock_get_http_proxy.call_count)

            # Test http get
            resp = restutil.http_get("http://bar.com", use_proxy=True)
            self.assertEqual("hehe", resp.read())
            self.assertEqual(1, mock_get_http_proxy.call_count)

    def test_proxy_conditions_with_no_proxy(self):
        should_use_proxy = True
        should_not_use_proxy = False
        use_proxy = True

        no_proxy_list = ["foo.com", "www.google.com", "168.63.129.16"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)
            }):
            host = "10.0.0.1"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "foo.com"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "www.google.com"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "168.63.129.16"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "www.bar.com"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

        no_proxy_list = ["10.0.0.1/24"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)
            }):
            host = "www.bar.com"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.0.1"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.1.1"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

        # When No_proxy is empty
        with patch.dict(os.environ, {
                'no_proxy': ""
            }):
            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "foo.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.google.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "168.63.129.16"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.bar.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.1.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

        # When os.environ is empty - No global variables defined.
        with patch.dict(os.environ, {}):
            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "foo.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.google.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "168.63.129.16"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.bar.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.1.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_proxy_secure(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=\
                Mock(return_value=\
                    Mock(read=Mock(return_value="TheResults"))))
        HTTPSConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10,
                                      proxy_host="foo.bar", proxy_port=23333,
                                      secure=True)

        HTTPConnection.assert_not_called()
        HTTPSConnection.assert_has_calls([
            call("foo.bar", 23333, timeout=10)
        ])
        mock_conn.request.assert_has_calls([
            call(method="GET", url="https://foo:443/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_with_retry(self, _http_request, sleep):  # pylint: disable=unused-argument
        mock_http_resp = MagicMock()
        mock_http_resp.read = Mock(return_value="hehe")
        _http_request.return_value = mock_http_resp

        # Test http get
        resp = restutil.http_get("http://foo.bar")
        self.assertEqual("hehe", resp.read())

        # Test https get
        resp = restutil.http_get("https://foo.bar")
        self.assertEqual("hehe", resp.read())

        # Test http failure
        _http_request.side_effect = httpclient.HTTPException("Http failure")
        self.assertRaises(restutil.HttpError, restutil.http_get, "http://foo.bar")

        # Test http failure
        _http_request.side_effect = IOError("IO failure")
        self.assertRaises(restutil.HttpError, restutil.http_get, "http://foo.bar")

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_status_codes(self, _http_request, _sleep):
        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE),
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar")
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_passed_status_codes(self, _http_request, _sleep):
        # Ensure the code is not part of the standard set
        self.assertFalse(httpclient.UNAUTHORIZED in restutil.RETRY_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.UNAUTHORIZED),
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar", retry_codes=[httpclient.UNAUTHORIZED])
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_with_fibonacci_delay(self, _http_request, _sleep):
        # Ensure the code is not a throttle code
        self.assertFalse(httpclient.BAD_GATEWAY in restutil.THROTTLE_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.BAD_GATEWAY)
            for i in range(restutil.DEFAULT_RETRIES)
        ] + [Mock(status=httpclient.OK)]

        restutil.http_get("https://foo.bar", max_retry=restutil.DEFAULT_RETRIES+1)
        self.assertEqual(restutil.DEFAULT_RETRIES+1, _http_request.call_count)
        self.assertEqual(restutil.DEFAULT_RETRIES, _sleep.call_count)
        self.assertEqual(
            [call(restutil._compute_delay(i+1, restutil.DELAY_IN_SECONDS))
             for i in range(restutil.DEFAULT_RETRIES)],
            _sleep.call_args_list)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_with_constant_delay_when_throttled(self, _http_request, _sleep):
        # Ensure the code is a throttle code
        self.assertTrue(httpclient.SERVICE_UNAVAILABLE in restutil.THROTTLE_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE)
            for i in range(restutil.DEFAULT_RETRIES)  # pylint: disable=unused-variable
        ] + [Mock(status=httpclient.OK)]

        restutil.http_get("https://foo.bar", max_retry=restutil.DEFAULT_RETRIES+1)
        self.assertEqual(restutil.DEFAULT_RETRIES+1, _http_request.call_count)
        self.assertEqual(restutil.DEFAULT_RETRIES, _sleep.call_count)
        self.assertEqual(
            [call(1) for i in range(restutil.DEFAULT_RETRIES)],
            _sleep.call_args_list)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_for_safe_minimum_number_when_throttled(self, _http_request, _sleep):
        # Ensure the code is a throttle code
        self.assertTrue(httpclient.SERVICE_UNAVAILABLE in restutil.THROTTLE_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE)
            for i in range(restutil.THROTTLE_RETRIES-1)  # pylint: disable=unused-variable
        ] + [Mock(status=httpclient.OK)]

        restutil.http_get("https://foo.bar", max_retry=1)
        self.assertEqual(restutil.THROTTLE_RETRIES, _http_request.call_count)
        self.assertEqual(restutil.THROTTLE_RETRIES-1, _sleep.call_count)
        self.assertEqual(
            [call(1) for i in range(restutil.THROTTLE_RETRIES-1)],
            _sleep.call_args_list)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_try_max_retries_when_telemetry_throttled(self, _http_request, _sleep):
        # Ensure the code is a throttle code
        self.assertTrue(httpclient.SERVICE_UNAVAILABLE in restutil.THROTTLE_CODES)

        max_retries = 5
        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE)
            for i in range(max_retries-1)  # pylint: disable=unused-variable
        ] + [Mock(status=httpclient.OK)]
        restutil.http_post("https://telemetrydata", data=None, max_retry=max_retries)
        self.assertEqual(max_retries, _http_request.call_count)
        self.assertEqual(max_retries-1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_raises_for_resource_gone(self, _http_request, _sleep):
        _http_request.side_effect = [
            Mock(status=httpclient.GONE)
        ]

        self.assertRaises(ResourceGoneError, restutil.http_get, "https://foo.bar")
        self.assertEqual(1, _http_request.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_raises_for_invalid_container_configuration(self, _http_request, _sleep):
        def read():
            return b'{ "errorCode": "InvalidContainerConfiguration", "message": "Invalid request." }'
        _http_request.side_effect = [
            Mock(status=httpclient.BAD_REQUEST, reason='Bad Request', read=read)
        ]

        self.assertRaises(InvalidContainerError, restutil.http_get, "https://foo.bar")
        self.assertEqual(1, _http_request.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_raises_for_invalid_role_configuration(self, _http_request, _sleep):
        def read():
            return b'{ "errorCode": "RequestRoleConfigFileNotFound", "message": "Invalid request." }'
        _http_request.side_effect = [
            Mock(status=httpclient.GONE, reason='Resource Gone', read=read)
        ]

        self.assertRaises(ResourceGoneError, restutil.http_get, "https://foo.bar")
        self.assertEqual(1, _http_request.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_exceptions(self, _http_request, _sleep):
        # Testing each exception is difficult because they have varying
        # signatures; for now, test one and ensure the set is unchanged
        recognized_exceptions = [
            httpclient.NotConnected,
            httpclient.IncompleteRead,
            httpclient.ImproperConnectionState,
            httpclient.BadStatusLine
        ]
        self.assertEqual(recognized_exceptions, restutil.RETRY_EXCEPTIONS)

        _http_request.side_effect = [
            httpclient.IncompleteRead(''),
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar")
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_ioerrors(self, _http_request, _sleep):
        ioerror = IOError()
        ioerror.errno = 42

        _http_request.side_effect = [
            ioerror,
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar")
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    def test_request_failed(self):
        self.assertTrue(restutil.request_failed(None))

        resp = Mock()
        for status in restutil.OK_CODES:
            resp.status = status
            self.assertFalse(restutil.request_failed(resp))

        self.assertFalse(httpclient.BAD_REQUEST in restutil.OK_CODES)
        resp.status = httpclient.BAD_REQUEST
        self.assertTrue(restutil.request_failed(resp))

        self.assertFalse(
            restutil.request_failed(
                resp, ok_codes=[httpclient.BAD_REQUEST]))

    def test_request_succeeded(self):
        self.assertFalse(restutil.request_succeeded(None))

        resp = Mock()
        for status in restutil.OK_CODES:
            resp.status = status
            self.assertTrue(restutil.request_succeeded(resp))

        self.assertFalse(httpclient.BAD_REQUEST in restutil.OK_CODES)
        resp.status = httpclient.BAD_REQUEST
        self.assertFalse(restutil.request_succeeded(resp))

        self.assertTrue(
            restutil.request_succeeded(
                resp, ok_codes=[httpclient.BAD_REQUEST]))

    def test_read_response_error(self):
        """
        Validate the read_response_error method handles encoding correctly
        """
        responses = ['message', b'message', '\x80message\x80']
        response = MagicMock()
        response.status = 'status'
        response.reason = 'reason'
        with patch.object(response, 'read') as patch_response:
            for s in responses:
                patch_response.return_value = s
                result = restutil.read_response_error(response)
                print("RESPONSE: {0}".format(s))
                print("RESULT: {0}".format(result))
                print("PRESENT: {0}".format('[status: reason]' in result))
                self.assertTrue('[status: reason]' in result)
                self.assertTrue('message' in result)

    def test_read_response_bytes(self):
        response_bytes = '7b:0a:20:20:20:20:22:65:72:72:6f:72:43:6f:64:65:22:' \
                         '3a:20:22:54:68:65:20:62:6c:6f:62:20:74:79:70:65:20:' \
                         '69:73:20:69:6e:76:61:6c:69:64:20:66:6f:72:20:74:68:' \
                         '69:73:20:6f:70:65:72:61:74:69:6f:6e:2e:22:2c:0a:20:' \
                         '20:20:20:22:6d:65:73:73:61:67:65:22:3a:20:22:c3:af:' \
                         'c2:bb:c2:bf:3c:3f:78:6d:6c:20:76:65:72:73:69:6f:6e:' \
                         '3d:22:31:2e:30:22:20:65:6e:63:6f:64:69:6e:67:3d:22:' \
                         '75:74:66:2d:38:22:3f:3e:3c:45:72:72:6f:72:3e:3c:43:' \
                         '6f:64:65:3e:49:6e:76:61:6c:69:64:42:6c:6f:62:54:79:' \
                         '70:65:3c:2f:43:6f:64:65:3e:3c:4d:65:73:73:61:67:65:' \
                         '3e:54:68:65:20:62:6c:6f:62:20:74:79:70:65:20:69:73:' \
                         '20:69:6e:76:61:6c:69:64:20:66:6f:72:20:74:68:69:73:' \
                         '20:6f:70:65:72:61:74:69:6f:6e:2e:0a:52:65:71:75:65:' \
                         '73:74:49:64:3a:63:37:34:32:39:30:63:62:2d:30:30:30:' \
                         '31:2d:30:30:62:35:2d:30:36:64:61:2d:64:64:36:36:36:' \
                         '61:30:30:30:22:2c:0a:20:20:20:20:22:64:65:74:61:69:' \
                         '6c:73:22:3a:20:22:22:0a:7d'.split(':')
        expected_response = '[HTTP Failed] [status: reason] {\n    "errorCode": "The blob ' \
                            'type is invalid for this operation.",\n    ' \
                            '"message": "' \
                            'InvalidBlobTypeThe ' \
                            'blob type is invalid for this operation.\n' \
                            'RequestId:c74290cb-0001-00b5-06da-dd666a000",' \
                            '\n    "details": ""\n}'

        response_string = ''.join(chr(int(b, 16)) for b in response_bytes)
        response = MagicMock()
        response.status = 'status'
        response.reason = 'reason'
        with patch.object(response, 'read') as patch_response:
            patch_response.return_value = response_string
            result = restutil.read_response_error(response)
            self.assertEqual(result, expected_response)
            try:
                raise HttpError("{0}".format(result))
            except HttpError as e:
                self.assertTrue(result in ustr(e))


if __name__ == '__main__':
    unittest.main()

Azure-WALinuxAgent-a976115/tests/common/utils/test_shell_util.py

# -*- coding: utf-8 -*-
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import os import signal import subprocess import sys import tempfile import threading import unittest from azurelinuxagent.common.future import ustr import azurelinuxagent.common.utils.shellutil as shellutil from tests.lib.tools import AgentTestCase, patch, skip_if_predicate_true from tests.lib.miscellaneous_tools import wait_for, format_processes class ShellQuoteTestCase(AgentTestCase): def test_shellquote(self): self.assertEqual("\'foo\'", shellutil.quote("foo")) self.assertEqual("\'foo bar\'", shellutil.quote("foo bar")) self.assertEqual("'foo'\\''bar'", shellutil.quote("foo\'bar")) class RunTestCase(AgentTestCase): def test_it_should_return_the_exit_code_of_the_command(self): exit_code = shellutil.run("exit 123") self.assertEqual(123, exit_code) def test_it_should_be_a_pass_thru_to_run_get_output(self): with patch.object(shellutil, "run_get_output", return_value=(0, "")) as mock_run_get_output: shellutil.run("echo hello world!", chk_err=False, expected_errors=[1, 2, 3]) self.assertEqual(mock_run_get_output.call_count, 1) args, kwargs = mock_run_get_output.call_args self.assertEqual(args[0], "echo hello world!") self.assertEqual(kwargs["chk_err"], False) self.assertEqual(kwargs["expected_errors"], [1, 2, 3]) class RunGetOutputTestCase(AgentTestCase): def test_run_get_output(self): output = shellutil.run_get_output(u"ls /") self.assertNotEqual(None, output) self.assertEqual(0, output[0]) err = shellutil.run_get_output(u"ls /not-exists") self.assertNotEqual(0, err[0]) err = shellutil.run_get_output(u"ls 我") self.assertNotEqual(0, err[0]) def test_it_should_log_the_command(self): command = "echo hello world!"
with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger: shellutil.run_get_output(command) self.assertEqual(mock_logger.verbose.call_count, 1) args, kwargs = mock_logger.verbose.call_args # pylint: disable=unused-variable command_in_message = args[1] self.assertEqual(command_in_message, command) def test_it_should_log_command_failures_as_errors(self): return_code = 99 command = "exit {0}".format(return_code) with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger: shellutil.run_get_output(command, log_cmd=False) self.assertEqual(mock_logger.error.call_count, 1) args, _ = mock_logger.error.call_args message = args[0] # message is similar to "Command: [exit 99], return code: [99], result: []" self.assertIn("[{0}]".format(command), message) self.assertIn("[{0}]".format(return_code), message) self.assertEqual(mock_logger.info.call_count, 0, "Did not expect any info messages. Got: {0}".format(mock_logger.info.call_args_list)) self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any warnings. Got: {0}".format(mock_logger.warn.call_args_list)) def test_it_should_log_expected_errors_as_info(self): return_code = 99 command = "exit {0}".format(return_code) with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger: shellutil.run_get_output(command, log_cmd=False, expected_errors=[return_code]) self.assertEqual(mock_logger.info.call_count, 1) args, _ = mock_logger.info.call_args message = args[0] # message is similar to "Command: [exit 99], return code: [99], result: []" self.assertIn("[{0}]".format(command), message) self.assertIn("[{0}]".format(return_code), message) self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any warnings. Got: {0}".format(mock_logger.warn.call_args_list)) self.assertEqual(mock_logger.error.call_count, 0, "Did not expect any errors. 
Got: {0}".format(mock_logger.error.call_args_list)) def test_it_should_log_unexpected_errors_as_errors(self): return_code = 99 command = "exit {0}".format(return_code) with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger: shellutil.run_get_output(command, log_cmd=False, expected_errors=[return_code + 1]) self.assertEqual(mock_logger.error.call_count, 1) args, _ = mock_logger.error.call_args message = args[0] # message is similar to "Command: [exit 99], return code: [99], result: []" self.assertIn("[{0}]".format(command), message) self.assertIn("[{0}]".format(return_code), message) self.assertEqual(mock_logger.info.call_count, 0, "Did not expect any info messages. Got: {0}".format(mock_logger.info.call_args_list)) self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any warnings. Got: {0}".format(mock_logger.warn.call_args_list)) class RunCommandTestCase(AgentTestCase): """ Tests for shellutil.run_command/run_pipe """ def __create_tee_script(self, return_code=0): """ Creates a Python script that tees its stdin to stdout and stderr """ tee_script = os.path.join(self.tmp_dir, "tee.py") AgentTestCase.create_script(tee_script, """ import sys for line in sys.stdin: sys.stdout.write(line) sys.stderr.write(line) exit({0}) """.format(return_code)) return tee_script def test_run_command_should_execute_the_command(self): command = ["echo", "-n", "A TEST STRING"] ret = shellutil.run_command(command) self.assertEqual(ret, "A TEST STRING") def test_run_command_should_use_popen_arg_list(self): with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen", wraps=subprocess.Popen) as popen_patch: command = ["echo", "-n", "A TEST STRING"] ret = shellutil.run_command(command) self.assertEqual(ret, "A TEST STRING") self.assertEqual(popen_patch.call_count, 1) args, kwargs = popen_patch.call_args self.assertTrue(any(arg for arg in args[0] if "A TEST STRING" in arg), "command not being used") 
self.assertEqual(kwargs['env'].get(shellutil.PARENT_PROCESS_NAME), shellutil.AZURE_GUEST_AGENT, "Env flag not being used") def test_run_pipe_should_execute_a_pipe_with_two_commands(self): # Output the same string 3 times and then remove duplicates test_string = "A TEST STRING\n" pipe = [["echo", "-n", "-e", test_string * 3], ["uniq"]] output = shellutil.run_pipe(pipe) self.assertEqual(output, test_string) def test_run_pipe_should_execute_a_pipe_with_more_than_two_commands(self): # # The test pipe splits the output of "ls" in lines and then greps for "." # # Sample output of "ls -d .": # drwxrwxr-x 13 nam nam 4096 Nov 13 16:54 . # pipe = [["ls", "-ld", "."], ["sed", "-r", "s/\\s+/\\n/g"], ["grep", "\\."]] output = shellutil.run_pipe(pipe) self.assertEqual(".\n", output, "The pipe did not produce the expected output. Got: {0}".format(output)) def __it_should_raise_an_exception_when_the_command_fails(self, action): with self.assertRaises(shellutil.CommandError) as context_manager: action() exception = context_manager.exception self.assertIn("tee.py", str(exception), "The CommandError does not include the expected command") self.assertEqual(1, exception.returncode, "Unexpected return value from the test pipe") self.assertEqual("TEST_STRING\n", exception.stdout, "Unexpected stdout from the test pipe") self.assertEqual("TEST_STRING\n", exception.stderr, "Unexpected stderr from the test pipe") def test_run_command_should_raise_an_exception_when_the_command_fails(self): tee_script = self.__create_tee_script(return_code=1) self.__it_should_raise_an_exception_when_the_command_fails( lambda: shellutil.run_command(tee_script, input="TEST_STRING\n")) def test_run_pipe_should_raise_an_exception_when_the_last_command_fails(self): tee_script = self.__create_tee_script(return_code=1) self.__it_should_raise_an_exception_when_the_command_fails( lambda: shellutil.run_pipe([["echo", "-n", "TEST_STRING\n"], [tee_script]])) def 
__it_should_raise_an_exception_when_it_cannot_execute_the_command(self, action): with self.assertRaises(Exception) as context_manager: action() exception = context_manager.exception self.assertIn("No such file or directory", str(exception)) def test_run_command_should_raise_an_exception_when_it_cannot_execute_the_command(self): self.__it_should_raise_an_exception_when_it_cannot_execute_the_command( lambda: shellutil.run_command("nonexistent_command")) @skip_if_predicate_true(lambda: sys.version_info[0] == 2, "Timeouts are not supported on Python 2") def test_run_command_should_raise_an_exception_when_the_command_times_out(self): with self.assertRaises(shellutil.CommandError) as context: shellutil.run_command(["sleep", "5"], timeout=1) self.assertIn("command timeout", context.exception.stderr, "The command did not time out") def test_run_pipe_should_raise_an_exception_when_it_cannot_execute_the_pipe(self): self.__it_should_raise_an_exception_when_it_cannot_execute_the_command( lambda: shellutil.run_pipe([["ls", "-ld", "."], ["nonexistent_command"], ["wc", "-l"]])) def __it_should_not_log_by_default(self, action): with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger: try: action() except Exception: pass self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any WARNINGS; Got: {0}".format(mock_logger.warn.call_args)) self.assertEqual(mock_logger.error.call_count, 0, "Did not expect any ERRORS; Got: {0}".format(mock_logger.error.call_args)) def test_run_command_it_should_not_log_by_default(self): self.__it_should_not_log_by_default( lambda: shellutil.run_command(["ls", "nonexistent_file"])) # Raises a CommandError self.__it_should_not_log_by_default( lambda: shellutil.run_command("nonexistent_command")) # Raises an OSError def test_run_pipe_it_should_not_log_by_default(self): self.__it_should_not_log_by_default( lambda: shellutil.run_pipe([["date"], [self.__create_tee_script(return_code=1)]])) # Raises a CommandError 
self.__it_should_not_log_by_default( lambda: shellutil.run_pipe([["date"], ["nonexistent_command"]])) # Raises an OSError def __it_should_log_an_error_when_log_error_is_set(self, action, command): with patch("azurelinuxagent.common.utils.shellutil.logger.error") as mock_log_error: try: action() except Exception: pass self.assertEqual(mock_log_error.call_count, 1) args, _ = mock_log_error.call_args self.assertTrue(any(command in str(a) for a in args), "The command was not logged") self.assertTrue(any("2" in str(a) for a in args), "The command's return code was not logged") # errno 2: No such file or directory def test_run_command_should_log_an_error_when_log_error_is_set(self): self.__it_should_log_an_error_when_log_error_is_set( lambda: shellutil.run_command(["ls", "file-does-not-exist"], log_error=True), # Raises a CommandError command="ls") self.__it_should_log_an_error_when_log_error_is_set( lambda: shellutil.run_command("command-does-not-exist", log_error=True), # Raises an OSError command="command-does-not-exist") def test_run_command_should_raise_when_both_the_input_and_stdin_parameters_are_specified(self): with tempfile.TemporaryFile() as input_file: with self.assertRaises(ValueError): shellutil.run_command(["cat"], input='0123456789ABCDEF', stdin=input_file) def test_run_command_should_read_the_command_input_from_the_input_parameter_when_it_is_a_string(self): command_input = 'TEST STRING' output = shellutil.run_command(["cat"], input=command_input) self.assertEqual(output, command_input, "The command did not process its input correctly; the output should match the input") def test_run_command_should_read_stdin_from_the_input_parameter_when_it_is_a_sequence_of_bytes(self): command_input = 'TEST BYTES' output = shellutil.run_command(["cat"], input=command_input) self.assertEqual(output, command_input, "The command did not process its input correctly; the output should match the input") def __it_should_read_the_command_input_from_the_stdin_parameter(self, 
action): command_input = 'TEST STRING\n' with tempfile.TemporaryFile() as input_file: input_file.write(command_input.encode()) input_file.seek(0) output = action(stdin=input_file) self.assertEqual(output, command_input, "The command did not process its input correctly; the output should match the input") def test_run_command_should_read_the_command_input_from_the_stdin_parameter(self): self.__it_should_read_the_command_input_from_the_stdin_parameter( lambda stdin: shellutil.run_command(["cat"], stdin=stdin)) def test_run_pipe_should_read_the_command_input_from_the_stdin_parameter(self): self.__it_should_read_the_command_input_from_the_stdin_parameter( lambda stdin: shellutil.run_pipe([["cat"], ["sort"]], stdin=stdin)) def __it_should_write_the_command_output_to_the_stdout_parameter(self, action): with tempfile.TemporaryFile() as output_file: captured_output = action(stdout=output_file) output_file.seek(0) command_output = ustr(output_file.read(), encoding='utf-8', errors='backslashreplace') self.assertEqual(command_output, "TEST STRING\n", "The command did not produce the correct output; the output should match the input") self.assertEqual("", captured_output, "No output should have been captured since it was redirected to a file. 
Output: [{0}]".format(captured_output)) def test_run_command_should_write_the_command_output_to_the_stdout_parameter(self): self.__it_should_write_the_command_output_to_the_stdout_parameter( lambda stdout: shellutil.run_command(["echo", "TEST STRING"], stdout=stdout)) def test_run_pipe_should_write_the_command_output_to_the_stdout_parameter(self): self.__it_should_write_the_command_output_to_the_stdout_parameter( lambda stdout: shellutil.run_pipe([["echo", "TEST STRING"], ["sort"]], stdout=stdout)) def __it_should_write_the_command_error_output_to_the_stderr_parameter(self, action): with tempfile.TemporaryFile() as output_file: action(stderr=output_file) output_file.seek(0) command_error_output = ustr(output_file.read(), encoding='utf-8', errors="backslashreplace") self.assertEqual("TEST STRING\n", command_error_output, "stderr was not redirected to the output file correctly") def test_run_command_should_write_the_command_error_output_to_the_stderr_parameter(self): self.__it_should_write_the_command_error_output_to_the_stderr_parameter( lambda stderr: shellutil.run_command(self.__create_tee_script(), input="TEST STRING\n", stderr=stderr)) def test_run_pipe_should_write_the_command_error_output_to_the_stderr_parameter(self): self.__it_should_write_the_command_error_output_to_the_stderr_parameter( lambda stderr: shellutil.run_pipe([["echo", "TEST STRING"], [self.__create_tee_script()]], stderr=stderr)) def test_run_pipe_should_capture_the_stderr_of_all_the_commands_in_the_pipe(self): with self.assertRaises(shellutil.CommandError) as context_manager: shellutil.run_pipe([ ["echo", "TEST STRING"], [self.__create_tee_script()], [self.__create_tee_script()], [self.__create_tee_script(return_code=1)]]) self.assertEqual("TEST STRING\n" * 3, context_manager.exception.stderr, "Expected 3 copies of the test string since there are 3 commands in the pipe") def test_run_command_should_return_a_string_by_default(self): output = shellutil.run_command(self.__create_tee_script(), 
input="TEST STRING") self.assertTrue(isinstance(output, ustr), "The return value should be a string. Got: '{0}'".format(type(output))) def test_run_pipe_should_return_a_string_by_default(self): output = shellutil.run_pipe([["echo", "TEST STRING"], [self.__create_tee_script()]]) self.assertTrue(isinstance(output, ustr), "The return value should be a string. Got: '{0}'".format(type(output))) def test_run_command_should_return_a_bytes_object_when_encode_output_is_false(self): output = shellutil.run_command(self.__create_tee_script(), input="TEST STRING", encode_output=False) self.assertTrue(isinstance(output, bytes), "The return value should be a bytes object. Got: '{0}'".format(type(output))) def test_run_pipe_should_return_a_bytes_object_when_encode_output_is_false(self): output = shellutil.run_pipe([["echo", "TEST STRING"], [self.__create_tee_script()]], encode_output=False) self.assertTrue(isinstance(output, bytes), "The return value should be a bytes object. Got: '{0}'".format(type(output))) def test_run_command_run_pipe_run_get_output_should_keep_track_of_the_running_commands(self): # The children processes run this script, which creates a file with the PIDs of the script and its parent and then sleeps for a long time child_script = os.path.join(self.tmp_dir, "write_pids.py") AgentTestCase.create_script(child_script, """ import os import sys import time with open(sys.argv[1], "w") as pid_file: pid_file.write("{0} {1}".format(os.getpid(), os.getppid())) time.sleep(120) """) threads = [] try: child_processes = [] parent_processes = [] try: # each of these files will contain the PIDs of the command that created it and its parent pid_files = [os.path.join(self.tmp_dir, "pids.txt.{0}".format(i)) for i in range(4)] # we test these functions in shellutil commands_to_execute = [ # run_get_output must be the first in this list; see the code to fetch the PIDs a few lines below lambda: shellutil.run_get_output("{0} {1}".format(child_script, pid_files[0])), lambda: 
shellutil.run_command([child_script, pid_files[1]]), lambda: shellutil.run_pipe([[child_script, pid_files[2]], [child_script, pid_files[3]]]), ] # start each command on a separate thread (since we need to examine the processes running the commands while they are running) def invoke(command): try: command() except shellutil.CommandError as command_error: if command_error.returncode != -9: # test cleanup terminates the commands, so this is expected raise for cmd in commands_to_execute: thread = threading.Thread(target=invoke, args=(cmd,)) thread.start() threads.append(thread) # now fetch the PIDs in the files created by the commands, but wait until they are created if not wait_for(lambda: all(os.path.exists(file) and os.path.getsize(file) > 0 for file in pid_files)): raise Exception("The child processes did not start within the allowed timeout") for sig_file in pid_files: with open(sig_file, "r") as read_handle: pids = read_handle.read().split() child_processes.append(int(pids[0])) parent_processes.append(int(pids[1])) # the first item in the PIDs we fetched corresponds to run_get_output, which invokes the command using the # shell, so in that case we need to use the parent's pid (i.e. 
the shell that we started) started_commands = parent_processes[0:1] + child_processes[1:] # wait for all the commands to start def all_commands_running(): all_commands_running.running_commands = shellutil.get_running_commands() return len(all_commands_running.running_commands) >= len(commands_to_execute) + 1 # +1 because run_pipe starts 2 commands all_commands_running.running_commands = [] if not wait_for(all_commands_running): self.fail("shellutil.get_running_commands() did not report the expected number of commands after the allowed timeout.\nExpected: {0}\nGot: {1}".format( format_processes(started_commands), format_processes(all_commands_running.running_commands))) started_commands.sort() all_commands_running.running_commands.sort() self.assertEqual( started_commands, all_commands_running.running_commands, "shellutil.get_running_commands() did not return the expected commands.\nExpected: {0}\nGot: {1}".format( format_processes(started_commands), format_processes(all_commands_running.running_commands))) finally: # terminate the child processes, since they are blocked for pid in child_processes: os.kill(pid, signal.SIGKILL) # once the processes complete, their PIDs should go away def no_commands_running(): no_commands_running.running_commands = shellutil.get_running_commands() return len(no_commands_running.running_commands) == 0 no_commands_running.running_commands = [] if not wait_for(no_commands_running): self.fail("shellutil.get_running_commands() should return empty after the commands complete. 
Got: {0}".format( format_processes(no_commands_running.running_commands))) finally: for thread in threads: thread.join(timeout=5) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/common/utils/test_text_util.py000066400000000000000000000155371510742556200246270ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.future import ustr from tests.lib.tools import AgentTestCase class TestTextUtil(AgentTestCase): def test_replace_non_ascii(self): data = ustr(b'\xef\xbb\xbfhehe', encoding='utf-8') self.assertEqual('hehe', textutil.replace_non_ascii(data)) data = "abcd\xa0e\xf0fghijk\xbblm" self.assertEqual("abcdefghijklm", textutil.replace_non_ascii(data)) data = "abcd\xa0e\xf0fghijk\xbblm" self.assertEqual("abcdXeXfghijkXlm", textutil.replace_non_ascii(data, replace_char='X')) self.assertEqual('', textutil.replace_non_ascii(None)) def test_remove_bom(self): # Test that the BOM can be removed data = ustr(b'\xef\xbb\xbfhehe', encoding='utf-8') data = textutil.remove_bom(data) self.assertNotEqual(0xbb, data[0]) # The BOM is a sequence of three bytes; if the length of the input is shorter # than three bytes, remove_bom should not do anything data = u"\xa7" data = textutil.remove_bom(data) self.assertEqual(data, data[0]) data = u"\xa7\xef" data = 
textutil.remove_bom(data) self.assertEqual(u"\xa7", data[0]) self.assertEqual(u"\xef", data[1]) #Test string without BOM is not affected data = u"hehe" data = textutil.remove_bom(data) self.assertEqual(u"h", data[0]) data = u"" data = textutil.remove_bom(data) self.assertEqual(u"", data) data = u" " data = textutil.remove_bom(data) self.assertEqual(u" ", data) def test_get_bytes_from_pem(self): content = ("-----BEGIN CERTIFICATE-----\n" "certificate\n" "-----END CERTIFICATE----\n") base64_bytes = textutil.get_bytes_from_pem(content) self.assertEqual("certificate", base64_bytes) content = ("-----BEGIN PRIVATE KEY-----\n" "private key\n" "-----END PRIVATE Key-----\n") base64_bytes = textutil.get_bytes_from_pem(content) self.assertEqual("private key", base64_bytes) def test_swap_hexstring(self): data = [ ['12', 1, '21'], ['12', 2, '12'], ['12', 3, '012'], ['12', 4, '0012'], ['123', 1, '321'], ['123', 2, '2301'], ['123', 3, '123'], ['123', 4, '0123'], ['1234', 1, '4321'], ['1234', 2, '3412'], ['1234', 3, '234001'], ['1234', 4, '1234'], ['abcdef12', 1, '21fedcba'], ['abcdef12', 2, '12efcdab'], ['abcdef12', 3, 'f12cde0ab'], ['abcdef12', 4, 'ef12abcd'], ['aBcdEf12', 1, '21fEdcBa'], ['aBcdEf12', 2, '12EfcdaB'], ['aBcdEf12', 3, 'f12cdE0aB'], ['aBcdEf12', 4, 'Ef12aBcd'] ] for t in data: self.assertEqual(t[2], textutil.swap_hexstring(t[0], width=t[1])) def test_compress(self): result = textutil.compress('[stdout]\nHello World\n\n[stderr]\n\n') self.assertEqual('eJyLLi5JyS8tieXySM3JyVcIzy/KSeHiigaKphYVxXJxAQDAYQr2', result) def test_empty_strings(self): self.assertTrue(textutil.is_str_none_or_whitespace(None)) self.assertTrue(textutil.is_str_none_or_whitespace(' ')) self.assertTrue(textutil.is_str_none_or_whitespace('\t')) self.assertTrue(textutil.is_str_none_or_whitespace('\n')) self.assertTrue(textutil.is_str_none_or_whitespace(' \t')) self.assertTrue(textutil.is_str_none_or_whitespace(' \r\n')) self.assertTrue(textutil.is_str_empty(None)) 
self.assertTrue(textutil.is_str_empty(' ')) self.assertTrue(textutil.is_str_empty('\t')) self.assertTrue(textutil.is_str_empty('\n')) self.assertTrue(textutil.is_str_empty(' \t')) self.assertTrue(textutil.is_str_empty(' \r\n')) self.assertFalse(textutil.is_str_none_or_whitespace(u' \x01 ')) self.assertFalse(textutil.is_str_none_or_whitespace(u'foo')) self.assertFalse(textutil.is_str_none_or_whitespace('bar')) self.assertFalse(textutil.is_str_empty(u' \x01 ')) self.assertFalse(textutil.is_str_empty(u'foo')) self.assertFalse(textutil.is_str_empty('bar')) hex_null_1 = u'\x00' hex_null_2 = u' \x00 ' self.assertFalse(textutil.is_str_none_or_whitespace(hex_null_1)) self.assertFalse(textutil.is_str_none_or_whitespace(hex_null_2)) self.assertTrue(textutil.is_str_empty(hex_null_1)) self.assertTrue(textutil.is_str_empty(hex_null_2)) self.assertNotEqual(textutil.is_str_none_or_whitespace(hex_null_1), textutil.is_str_empty(hex_null_1)) self.assertNotEqual(textutil.is_str_none_or_whitespace(hex_null_2), textutil.is_str_empty(hex_null_2)) def test_format_memory_value(self): """ Test formatting of memory amounts into human-readable units """ self.assertEqual(2048, textutil.format_memory_value('kilobytes', 2)) self.assertEqual(0, textutil.format_memory_value('kilobytes', 0)) self.assertEqual(2048000, textutil.format_memory_value('kilobytes', 2000)) self.assertEqual(2048 * 1024, textutil.format_memory_value('megabytes', 2)) self.assertEqual((1024 + 512) * 1024 * 1024, textutil.format_memory_value('gigabytes', 1.5)) self.assertRaises(ValueError, textutil.format_memory_value, 'KiloBytes', 1) self.assertRaises(TypeError, textutil.format_memory_value, 'bytes', None) def test_format_exception(self): """ Test formatting of exception into human-readable format """ def raise_exception(count=3): if count <= 1: raise Exception("Test Exception") raise_exception(count - 1) msg = "" try: raise_exception() except Exception as e: msg = textutil.format_exception(e) self.assertIn("Test Exception", 
msg) # Raise exception at count 1 after two nested calls since count starts at 3 self.assertEqual(2, msg.count("raise_exception(count - 1)")) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/daemon/000077500000000000000000000000001510742556200177775ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/daemon/__init__.py000066400000000000000000000011651510742556200221130ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/daemon/test_daemon.py000066400000000000000000000134341510742556200226600ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import unittest
from multiprocessing import Process

import azurelinuxagent.common.conf as conf
from azurelinuxagent.daemon.main import OPENSSL_FIPS_ENVIRONMENT, get_daemon_handler
from azurelinuxagent.pa.provision.default import ProvisionHandler
from tests.lib.tools import AgentTestCase, Mock, patch


class MockDaemonCall(object):
    def __init__(self, daemon_handler, count):
        self.daemon_handler = daemon_handler
        self.count = count

    def __call__(self, *args, **kw):
        self.count = self.count - 1

        # Stop daemon after restarting for n times
        if self.count <= 0:
            self.daemon_handler.running = False

        raise Exception("Mock unhandled exception")


class TestDaemon(AgentTestCase):
    @patch("time.sleep")
    def test_daemon_restart(self, mock_sleep):
        # Mock daemon function
        daemon_handler = get_daemon_handler()
        mock_daemon = Mock(side_effect=MockDaemonCall(daemon_handler, 2))
        daemon_handler.daemon = mock_daemon
        daemon_handler.check_pid = Mock()

        daemon_handler.run()

        mock_sleep.assert_any_call(15)
        self.assertEqual(2, daemon_handler.daemon.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.daemon.main.conf")
    @patch("azurelinuxagent.daemon.main.sys.exit")
    def test_check_pid(self, mock_exit, mock_conf, _):
        daemon_handler = get_daemon_handler()

        mock_pid_file = os.path.join(self.tmp_dir, "pid")
        mock_conf.get_agent_pid_file_path = Mock(return_value=mock_pid_file)

        daemon_handler.check_pid()
        self.assertTrue(os.path.isfile(mock_pid_file))

        daemon_handler.check_pid()
        mock_exit.assert_any_call(0)

    @patch("azurelinuxagent.daemon.main.DaemonHandler.check_pid")
    @patch("azurelinuxagent.common.conf.get_fips_enabled", return_value=True)
    def test_set_openssl_fips(self, _, __):
        daemon_handler = get_daemon_handler()
        daemon_handler.running = False
        with patch.dict("os.environ"):
            daemon_handler.run()
            self.assertTrue(OPENSSL_FIPS_ENVIRONMENT in os.environ)
            self.assertEqual('1', os.environ[OPENSSL_FIPS_ENVIRONMENT])

    @patch("azurelinuxagent.daemon.main.DaemonHandler.check_pid")
    @patch("azurelinuxagent.common.conf.get_fips_enabled", return_value=False)
    def test_does_not_set_openssl_fips(self, _, __):
        daemon_handler = get_daemon_handler()
        daemon_handler.running = False
        with patch.dict("os.environ"):
            daemon_handler.run()
            self.assertFalse(OPENSSL_FIPS_ENVIRONMENT in os.environ)

    @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent')
    @patch('azurelinuxagent.ga.update.UpdateHandler.run_latest')
    @patch('azurelinuxagent.pa.provision.default.ProvisionHandler.run')
    def test_daemon_agent_enabled(self, patch_run_provision, patch_run_latest, gpa):  # pylint: disable=unused-argument
        """
        Agent should run normally when no disable_agent is found
        """
        with patch('azurelinuxagent.pa.provision.get_provision_handler', return_value=ProvisionHandler()):
            # DaemonHandler._initialize_telemetry requires communication with WireServer and IMDS; since we
            # are not using telemetry in this test we mock it out
            with patch('azurelinuxagent.daemon.main.DaemonHandler._initialize_telemetry'):
                self.assertFalse(os.path.exists(conf.get_disable_agent_file_path()))

                daemon_handler = get_daemon_handler()

                def stop_daemon(child_args):  # pylint: disable=unused-argument
                    daemon_handler.running = False

                patch_run_latest.side_effect = stop_daemon
                daemon_handler.run()

                self.assertEqual(1, patch_run_provision.call_count)
                self.assertEqual(1, patch_run_latest.call_count)

    @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent')
    @patch('azurelinuxagent.ga.update.UpdateHandler.run_latest', side_effect=AgentTestCase.fail)
    @patch('azurelinuxagent.pa.provision.default.ProvisionHandler.run', side_effect=ProvisionHandler.write_agent_disabled)
    def test_daemon_agent_disabled(self, _, patch_run_latest, gpa):  # pylint: disable=unused-argument
        """
        Agent should provision, then sleep forever when disable_agent is found
        """
        with patch('azurelinuxagent.pa.provision.get_provision_handler',
                   return_value=ProvisionHandler()):
            # file is created by provisioning handler
            self.assertFalse(os.path.exists(conf.get_disable_agent_file_path()))

            daemon_handler = get_daemon_handler()

            # we need to assert this thread will sleep forever, so fork it
            daemon = Process(target=daemon_handler.run)
            daemon.start()
            daemon.join(timeout=5)

            self.assertTrue(daemon.is_alive())
            daemon.terminate()

        # disable_agent was written, run_latest was not called
        self.assertTrue(os.path.exists(conf.get_disable_agent_file_path()))
        self.assertEqual(0, patch_run_latest.call_count)


if __name__ == '__main__':
    unittest.main()
Azure-WALinuxAgent-a976115/tests/daemon/test_resourcedisk.py000066400000000000000000000174501510742556200241210ustar00rootroot00000000000000
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import stat
import sys
import unittest

from tests.lib.tools import AgentTestCase, patch, DEFAULT
from azurelinuxagent.daemon.resourcedisk import get_resourcedisk_handler
from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler
from azurelinuxagent.common.utils import shellutil


class TestResourceDisk(AgentTestCase):
    def test_mount_flags_empty(self):
        partition = '/dev/sdb1'
        mountpoint = '/mnt/resource'
        options = None
        expected = 'mount -t ext3 /dev/sdb1 /mnt/resource'
        rdh = ResourceDiskHandler()
        mount_string = rdh.get_mount_string(options, partition, mountpoint)
        self.assertEqual(expected, mount_string)

    def test_mount_flags_many(self):
        partition = '/dev/sdb1'
        mountpoint = '/mnt/resource'
        options = 'noexec,noguid,nodev'
        expected = 'mount -t ext3 -o noexec,noguid,nodev /dev/sdb1 /mnt/resource'
        rdh = ResourceDiskHandler()
        mount_string = rdh.get_mount_string(options, partition, mountpoint)
        self.assertEqual(expected, mount_string)

    @patch('azurelinuxagent.common.utils.shellutil.run_get_output')
    @patch('azurelinuxagent.common.utils.shellutil.run')
    @patch('azurelinuxagent.daemon.resourcedisk.default.ResourceDiskHandler.mkfile')
    @patch('azurelinuxagent.daemon.resourcedisk.default.os.path.isfile', return_value=False)
    @patch(
        'azurelinuxagent.daemon.resourcedisk.default.ResourceDiskHandler.check_existing_swap_file',
        return_value=False)
    def test_create_swap_space(
            self,
            mock_check_existing_swap_file,  # pylint: disable=unused-argument
            mock_isfile,  # pylint: disable=unused-argument
            mock_mkfile,  # pylint: disable=unused-argument
            mock_run,
            mock_run_get_output):
        mount_point = '/mnt/resource'
        size_mb = 128
        rdh = ResourceDiskHandler()

        def rgo_side_effect(*args, **kwargs):  # pylint: disable=unused-argument
            if args[0] == 'swapon -s':
                return (0, 'Filename\t\t\t\tType\t\tSize\tUsed\tPriority\n/mnt/resource/swapfile \tfile \t131068\t0\t-2\n')
            return DEFAULT

        def run_side_effect(*args, **kwargs):  # pylint: disable=unused-argument
            # We have to change the default mock behavior to return a falsey value
            # (instead of the default truthy of the mock), because we are testing
            # really for the exit code of the swapon command to return 0.
            if 'swapon' in args[0]:
                return 0
            return None

        mock_run_get_output.side_effect = rgo_side_effect
        mock_run.side_effect = run_side_effect

        rdh.create_swap_space(
            mount_point=mount_point,
            size_mb=size_mb
        )

    def test_mkfile(self):
        # setup
        test_file = os.path.join(self.tmp_dir, 'test_file')
        file_size = 1024 * 128
        if os.path.exists(test_file):
            os.remove(test_file)

        # execute
        get_resourcedisk_handler().mkfile(test_file, file_size)

        # assert
        assert os.path.exists(test_file)

        # only the owner should have access
        mode = os.stat(test_file).st_mode & (
            stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
        assert mode == stat.S_IRUSR | stat.S_IWUSR

        # cleanup
        os.remove(test_file)

    def test_mkfile_dd_fallback(self):
        with patch.object(shellutil, "run") as run_patch:
            # setup
            run_patch.return_value = 1
            test_file = os.path.join(self.tmp_dir, 'test_file')
            file_size = 1024 * 128

            # execute
            if sys.version_info >= (3, 3):
                with patch("os.posix_fallocate", side_effect=Exception('failure')):
                    get_resourcedisk_handler().mkfile(test_file, file_size)
            else:
                get_resourcedisk_handler().mkfile(test_file, file_size)

            # assert
            assert run_patch.call_count > 1
            assert "fallocate" in run_patch.call_args_list[0][0][0]
            assert "dd if" in run_patch.call_args_list[-1][0][0]

    def test_mkfile_xfs_fs(self):
        # setup
        test_file = os.path.join(self.tmp_dir, 'test_file')
        file_size = 1024 * 128
        if os.path.exists(test_file):
            os.remove(test_file)

        # execute
        resource_disk_handler = get_resourcedisk_handler()
        resource_disk_handler.fs = 'xfs'

        with patch.object(shellutil, "run") as run_patch:
            resource_disk_handler.mkfile(test_file, file_size)

        # assert
        if sys.version_info >= (3, 3):
            with patch("os.posix_fallocate") as posix_fallocate:
                self.assertEqual(0, posix_fallocate.call_count)

        assert run_patch.call_count == 1
        assert "dd if" in run_patch.call_args_list[0][0][0]

    def test_change_partition_type(self):
        resource_handler = get_resourcedisk_handler()

        # test when sfdisk --part-type does not exist
        with patch.object(shellutil, "run_get_output",
                          side_effect=[[1, ''], [0, '']]) as run_patch:
            resource_handler.change_partition_type(
                suppress_message=True, option_str='')

            # assert
            assert run_patch.call_count == 2
            assert "sfdisk --part-type" in run_patch.call_args_list[0][0][0]
            assert "sfdisk -c" in run_patch.call_args_list[1][0][0]

        # test when sfdisk --part-type exists
        with patch.object(shellutil, "run_get_output",
                          side_effect=[[0, '']]) as run_patch:
            resource_handler.change_partition_type(
                suppress_message=True, option_str='')

            # assert
            assert run_patch.call_count == 1
            assert "sfdisk --part-type" in run_patch.call_args_list[0][0][0]

    def test_check_existing_swap_file(self):
        test_file = os.path.join(self.tmp_dir, 'test_swap_file')
        file_size = 1024 * 128
        if os.path.exists(test_file):
            os.remove(test_file)

        with open(test_file, "wb") as file:  # pylint: disable=redefined-builtin
            file.write(bytearray(file_size))

        os.chmod(test_file, stat.S_ISUID | stat.S_ISGID | stat.S_IRUSR |
                 stat.S_IWUSR | stat.S_IRWXG | stat.S_IRWXO)  # 0o6677

        def swap_on(_):  # mimic the output of "swapon -s"
            return [
                "Filename Type Size Used Priority",
                "{0} partition 16498684 0 -2".format(test_file)
            ]

        with patch.object(shellutil, "run_get_output", side_effect=swap_on):
            get_resourcedisk_handler().check_existing_swap_file(
                test_file, test_file, file_size)

        # it should remove access from group, others
        mode = os.stat(test_file).st_mode & (stat.S_ISUID | stat.S_ISGID |
                                             stat.S_IRWXU | stat.S_IWUSR |
                                             stat.S_IRWXG | stat.S_IRWXO)  # 0o6777
        assert mode == stat.S_ISUID | stat.S_ISGID | stat.S_IRUSR | stat.S_IWUSR  # 0o6600

        os.remove(test_file)


if __name__ == '__main__':
    unittest.main()
Azure-WALinuxAgent-a976115/tests/daemon/test_scvmm.py000066400000000000000000000066171510742556200225470ustar00rootroot00000000000000
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#
# Implements parts of RFC 2131, 1541, 1497 and
# http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx
# http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx

import os
import unittest

import mock

import azurelinuxagent.daemon.scvmm as scvmm
from azurelinuxagent.common import conf
from azurelinuxagent.common.osutil.default import DefaultOSUtil
from azurelinuxagent.common.utils import fileutil
from tests.lib.tools import AgentTestCase, Mock, patch


class TestSCVMM(AgentTestCase):
    def test_scvmm_detection_with_file(self):
        # setup
        conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir)
        conf.get_detect_scvmm_env = Mock(return_value=True)
        scvmm_file = os.path.join(self.tmp_dir, scvmm.VMM_CONF_FILE_NAME)
        fileutil.write_file(scvmm_file, "")

        with patch.object(scvmm.ScvmmHandler, 'start_scvmm_agent') as po:
            with patch('os.listdir', return_value=["sr0", "sr1", "sr2"]):
                with patch('time.sleep', return_value=0):
                    # execute
                    failed = False
                    try:
                        scvmm.get_scvmm_handler().run()
                    except:  # pylint: disable=bare-except
                        failed = True
                    # assert
                    self.assertTrue(failed)
                    self.assertTrue(po.call_count == 1)
                    # cleanup
                    os.remove(scvmm_file)

    def test_scvmm_detection_with_multiple_cdroms(self):
        # setup
        conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir)
        conf.get_detect_scvmm_env = Mock(return_value=True)

        # execute
        with mock.patch.object(DefaultOSUtil, 'mount_dvd') as patch_mount:
            with patch('os.listdir', return_value=["sr0", "sr1", "sr2"]):
                scvmm.ScvmmHandler().detect_scvmm_env()
                # assert
                assert patch_mount.call_count == 3
                assert patch_mount.call_args_list[0][1]['dvd_device'] == '/dev/sr0'
                assert patch_mount.call_args_list[1][1]['dvd_device'] == '/dev/sr1'
                assert patch_mount.call_args_list[2][1]['dvd_device'] == '/dev/sr2'

    def test_scvmm_detection_without_file(self):
        # setup
        conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir)
        conf.get_detect_scvmm_env = Mock(return_value=True)
        scvmm_file = os.path.join(self.tmp_dir, scvmm.VMM_CONF_FILE_NAME)
        if os.path.exists(scvmm_file):
            os.remove(scvmm_file)

        with mock.patch.object(scvmm.ScvmmHandler, 'start_scvmm_agent') as patch_start:
            # execute
            scvmm.ScvmmHandler().detect_scvmm_env()
            # assert
            patch_start.assert_not_called()


if __name__ == '__main__':
    unittest.main()
Azure-WALinuxAgent-a976115/tests/data/000077500000000000000000000000001510742556200174455ustar00rootroot00000000000000
Azure-WALinuxAgent-a976115/tests/data/2000066400000000000000000000006361510742556200175360ustar00rootroot00000000000000
# This is private data. Do not parse.
ADDRESS=10.0.0.69 NETMASK=255.255.255.0 ROUTER=10.0.0.1 SERVER_ADDRESS=168.63.129.16 NEXT_SERVER=168.63.129.16 T1=4294967295 T2=4294967295 LIFETIME=4294967295 DNS=168.63.129.16 DOMAINNAME=2rdlxelcdvjkok2emfc.bx.internal.cloudapp.net ROUTES=0.0.0.0/0,10.0.0.1 168.63.129.16/32,10.0.0.1 169.254.169.254/32,10.0.0.1 CLIENTID=ff0406a3a3000201120dc9092eccd2344 OPTION_245=a83f8110 Azure-WALinuxAgent-a976115/tests/data/cgroups/000077500000000000000000000000001510742556200211275ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cgroups/cgroup.procs000066400000000000000000000000131510742556200234700ustar00rootroot00000000000000123 234 345Azure-WALinuxAgent-a976115/tests/data/cgroups/dummy_proc_cmdline000066400000000000000000000000751510742556200247250ustar00rootroot00000000000000python-ubin/WALinuxAgent-2.2.45-py2.7.egg-run-exthandlersAzure-WALinuxAgent-a976115/tests/data/cgroups/dummy_proc_comm000066400000000000000000000000061510742556200242370ustar00rootroot00000000000000pythonAzure-WALinuxAgent-a976115/tests/data/cgroups/dummy_proc_statm000066400000000000000000000000361510742556200244370ustar00rootroot00000000000000980608 81022 30304 4 0 93606 0Azure-WALinuxAgent-a976115/tests/data/cgroups/hybrid/000077500000000000000000000000001510742556200224105ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cgroups/hybrid/sys_fs_cgroup_cgroup.controllers000066400000000000000000000000001510742556200311320ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cgroups/proc_self_cgroup_azure_slice000066400000000000000000000006031510742556200267710ustar00rootroot0000000000000012:blkio:/azure.slice/walinuxagent.service 11:cpu,cpuacct:/azure.slice/walinuxagent.service 10:devices:/azure.slice/walinuxagent.service 9:pids:/azure.slice/walinuxagent.service 8:memory:/azure.slice/walinuxagent.service 7:freezer:/ 6:hugetlb:/ 5:perf_event:/ 4:net_cls,net_prio:/ 3:cpuset:/ 2:rdma:/ 1:name=systemd:/azure.slice/walinuxagent.service 
0::/azure.slice/walinuxagent.service Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/000077500000000000000000000000001510742556200214555ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpu.stat000066400000000000000000000000561510742556200231420ustar00rootroot00000000000000nr_periods 1 nr_throttled 1 throttled_time 50 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpu.stat_t0000066400000000000000000000000561510742556200235450ustar00rootroot00000000000000nr_periods 1 nr_throttled 1 throttled_time 50 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpu.stat_t1000066400000000000000000000001011510742556200235350ustar00rootroot00000000000000nr_periods 66927 nr_throttled 25803 throttled_time 2075541442327 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpuacct.stat000066400000000000000000000000301510742556200237650ustar00rootroot00000000000000user 42380 system 21383 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpuacct.stat_t0000066400000000000000000000000301510742556200243700ustar00rootroot00000000000000user 42380 system 21383 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpuacct.stat_t1000066400000000000000000000000301510742556200243710ustar00rootroot00000000000000user 42390 system 21390 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/cpuacct.stat_t2000066400000000000000000000000301510742556200243720ustar00rootroot00000000000000user 42417 system 21428 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/memory.max_usage_in_bytes000066400000000000000000000000071510742556200265510ustar00rootroot000000000000001000000Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/memory.stat000066400000000000000000000012771510742556200236710ustar00rootroot00000000000000cache 50000 rss 100000 rss_huge 4194304 shmem 8192 mapped_file 540672 dirty 0 writeback 0 swap 20000 pgpgin 42584 pgpgout 24188 pgfault 71983 pgmajfault 402 inactive_anon 32854016 active_anon 12288 inactive_file 47472640 active_file 1290240 unevictable 0 hierarchical_memory_limit 
9223372036854771712 hierarchical_memsw_limit 9223372036854771712 total_cache 48771072 total_rss 32845824 total_rss_huge 4194304 total_shmem 8192 total_mapped_file 540672 total_dirty 0 total_writeback 0 total_swap 0 total_pgpgin 42584 total_pgpgout 24188 total_pgfault 71983 total_pgmajfault 402 total_inactive_anon 32854016 total_active_anon 12288 total_inactive_file 47472640 total_active_file 1290240 total_unevictable 0Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/memory.stat_missing000066400000000000000000000012511510742556200254120ustar00rootroot00000000000000cache 50000 rss_huge 4194304 shmem 8192 mapped_file 540672 dirty 0 writeback 0 pgpgin 42584 pgpgout 24188 pgfault 71983 pgmajfault 402 inactive_anon 32854016 active_anon 12288 inactive_file 47472640 active_file 1290240 unevictable 0 hierarchical_memory_limit 9223372036854771712 hierarchical_memsw_limit 9223372036854771712 total_cache 48771072 total_rss 32845824 total_rss_huge 4194304 total_shmem 8192 total_mapped_file 540672 total_dirty 0 total_writeback 0 total_swap 0 total_pgpgin 42584 total_pgpgout 24188 total_pgfault 71983 total_pgmajfault 402 total_inactive_anon 32854016 total_active_anon 12288 total_inactive_file 47472640 total_active_file 1290240 total_unevictable 0Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/proc_pid_cgroup000066400000000000000000000014311510742556200245550ustar00rootroot0000000000000012:devices:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 11:perf_event:/ 10:rdma:/ 9:blkio:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 8:net_cls,net_prio:/ 7:freezer:/ 6:hugetlb:/ 5:memory:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 4:cpuset:/ 3:cpu,cpuacct:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 2:pids:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 
1:name=systemd:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 0::/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/proc_self_cgroup000066400000000000000000000006121510742556200247320ustar00rootroot0000000000000012:blkio:/system.slice/walinuxagent.service 11:cpu,cpuacct:/system.slice/walinuxagent.service 10:devices:/system.slice/walinuxagent.service 9:pids:/system.slice/walinuxagent.service 8:memory:/system.slice/walinuxagent.service 7:freezer:/ 6:hugetlb:/ 5:perf_event:/ 4:net_cls,net_prio:/ 3:cpuset:/ 2:rdma:/ 1:name=systemd:/system.slice/walinuxagent.service 0::/system.slice/walinuxagent.service Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/proc_stat_t0000066400000000000000000000047371510742556200240140ustar00rootroot00000000000000cpu 1242996 18547 360544 3839094 7843 0 27848 0 0 0 cpu0 305718 4380 129988 2580650 5164 0 11050 0 0 0 cpu1 316342 4742 80358 416652 1001 0 8333 0 0 0 cpu2 311233 4916 75691 419501 720 0 4268 0 0 0 cpu3 309702 4508 74507 422289 956 0 4196 0 0 0 intr 78869641 14 5118 0 0 0 0 0 0 1 17275956 0 0 977671 0 0 0 0 0 0 0 1285764 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24614913 290 40 7069 56 9208 8660 8574436 0 0 0 0 0 0 0 0 0 0 0 0 10238 107667 2597 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 145952189 btime 1575311513 processes 49655 procs_running 2 procs_blocked 0 softirq 36699327 11497379 8043346 4993 1111315 54 54 1755674 7556345 0 6730167 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/proc_stat_t1000066400000000000000000000047371510742556200240150ustar00rootroot00000000000000cpu 1251670 18548 368052 3877822 7926 0 28103 0 0 0 cpu0 307860 4380 135060 2583922 5171 0 11173 0 0 0 cpu1 318474 4742 81276 428425 1019 0 8384 0 0 0 cpu2 313521 4916 76464 431225 748 0 4297 0 0 0 cpu3 311814 4508 75250 434248 986 0 4247 0 0 0 intr 81910494 14 5118 0 0 0 0 0 0 1 18573886 0 0 977671 0 0 0 0 0 0 0 1291486 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 25913782 290 40 7152 56 9383 8954 8604958 0 0 0 0 0 0 0 0 0 0 0 0 10367 107667 2672 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 150920611 btime 1575311513 processes 49693 procs_running 4 procs_blocked 0 
softirq 37131388 11544829 8149486 5023 1127332 54 54 1776947 7667169 0 6860494 Azure-WALinuxAgent-a976115/tests/data/cgroups/v1/proc_stat_t2000066400000000000000000000047411510742556200240110ustar00rootroot00000000000000cpu 1293131 18554 388040 3961230 8246 0 28928 0 0 0 cpu0 317730 4381 146935 2591197 5184 0 11535 0 0 0 cpu1 329039 4746 84213 453402 1178 0 8587 0 0 0 cpu2 324213 4917 79154 456426 826 0 4410 0 0 0 cpu3 322147 4509 77737 460203 1057 0 4395 0 0 0 intr 89815534 14 5118 0 0 0 0 0 0 1 21544211 0 0 977671 0 0 0 0 0 0 0 1306547 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 29282036 290 40 7566 56 10241 9584 8929545 0 0 0 0 0 0 0 0 0 0 0 0 11076 107667 2867 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 162816290 btime 1575311513 processes 49900 procs_running 2 procs_blocked 0 softirq 38518020 11917848 8439326 5100 1170153 54 54 1834614 7944771 0 7206100 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/000077500000000000000000000000001510742556200214565ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/cpu.stat000066400000000000000000000002651510742556200231450ustar00rootroot00000000000000usage_usec 817045397 user_usec 742283732 system_usec 74761665 core_sched.force_idle_usec 0 nr_periods 165261 nr_throttled 162912 throttled_usec 15735198706 nr_bursts 0 burst_usec 0 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/cpu.stat_t0000066400000000000000000000002651510742556200235500ustar00rootroot00000000000000usage_usec 817045397 user_usec 742283732 system_usec 74761665 core_sched.force_idle_usec 0 nr_periods 165261 nr_throttled 162912 throttled_usec 15735198706 nr_bursts 0 burst_usec 0 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/cpu.stat_t1000066400000000000000000000002651510742556200235510ustar00rootroot00000000000000usage_usec 819624087 user_usec 744545316 system_usec 75078770 core_sched.force_idle_usec 0 nr_periods 165783 nr_throttled 163430 throttled_usec 15796563650 nr_bursts 0 burst_usec 0 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/cpu.stat_t2000066400000000000000000000002651510742556200235520ustar00rootroot00000000000000usage_usec 822052295 
user_usec 746640066 system_usec 75412229 core_sched.force_idle_usec 0 nr_periods 166274 nr_throttled 163917 throttled_usec 15853013984 nr_bursts 0 burst_usec 0 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/memory.events000066400000000000000000000000651510742556200242150ustar00rootroot00000000000000low 0 high 9 max 0 oom 0 oom_kill 0 oom_group_kill 0 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/memory.events_missing000066400000000000000000000000561510742556200257460ustar00rootroot00000000000000low 0 max 0 oom 0 oom_kill 0 oom_group_kill 0 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/memory.peak000066400000000000000000000000121510742556200236210ustar00rootroot00000000000000194494464 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/memory.stat000066400000000000000000000015611510742556200236660ustar00rootroot00000000000000anon 17589300 file 134553600 kernel 25653248 kernel_stack 0 pagetables 0 sec_pagetables 0 percpu 726400 sock 0 vmalloc 0 shmem 0 zswap 0 zswapped 0 file_mapped 0 file_dirty 12288 file_writeback 0 swapcached 0 anon_thp 0 file_thp 0 shmem_thp 0 inactive_anon 0 active_anon 0 inactive_file 127213568 active_file 7340032 unevictable 0 slab_reclaimable 24061424 slab_unreclaimable 0 slab 24061424 workingset_refault_anon 0 workingset_refault_file 0 workingset_activate_anon 0 workingset_activate_file 0 workingset_restore_anon 0 workingset_restore_file 0 workingset_nodereclaim 128 pgscan 56624 pgsteal 56622 pgscan_kswapd 56624 pgscan_direct 0 pgscan_khugepaged 0 pgsteal_kswapd 56622 pgsteal_direct 0 pgsteal_khugepaged 0 pgfault 3673191 pgmajfault 1 pgrefill 124195 pgactivate 2 pgdeactivate 0 pglazyfree 0 pglazyfreed 0 zswpin 0 zswpout 0 thp_fault_alloc 255 thp_collapse_alloc 111 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/memory.stat_missing000066400000000000000000000015241510742556200254160ustar00rootroot00000000000000kernel 25653248 kernel_stack 0 pagetables 0 sec_pagetables 0 percpu 726400 sock 0 vmalloc 0 shmem 0 zswap 0 zswapped 0 
file_mapped 0 file_dirty 12288 file_writeback 0 swapcached 0 anon_thp 0 file_thp 0 shmem_thp 0 inactive_anon 0 active_anon 0 inactive_file 127213568 active_file 7340032 unevictable 0 slab_reclaimable 24061424 slab_unreclaimable 0 slab 24061424 workingset_refault_anon 0 workingset_refault_file 0 workingset_activate_anon 0 workingset_activate_file 0 workingset_restore_anon 0 workingset_restore_file 0 workingset_nodereclaim 128 pgscan 56624 pgsteal 56622 pgscan_kswapd 56624 pgscan_direct 0 pgscan_khugepaged 0 pgsteal_kswapd 56622 pgsteal_direct 0 pgsteal_khugepaged 0 pgfault 3673191 pgmajfault 1 pgrefill 124195 pgactivate 2 pgdeactivate 0 pglazyfree 0 pglazyfreed 0 zswpin 0 zswpout 0 thp_fault_alloc 255 thp_collapse_alloc 111 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/memory.swap.current000066400000000000000000000000061510742556200253370ustar00rootroot0000000000000020000 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/proc_pid_cgroup000066400000000000000000000001371510742556200245600ustar00rootroot000000000000000::/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/proc_self_cgroup000066400000000000000000000000461510742556200247340ustar00rootroot000000000000000::/system.slice/walinuxagent.service Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/proc_uptime_t0000066400000000000000000000000251510742556200243270ustar00rootroot00000000000000776968.02 1495073.30 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/proc_uptime_t1000066400000000000000000000000251510742556200243300ustar00rootroot00000000000000777350.57 1495797.44 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/proc_uptime_t2000066400000000000000000000000251510742556200243310ustar00rootroot00000000000000779218.68 1499425.34 Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/sys_fs_cgroup_cgroup.subtree_control000066400000000000000000000000321510742556200310500ustar00rootroot00000000000000cpuset cpu io memory pids 
Azure-WALinuxAgent-a976115/tests/data/cgroups/v2/sys_fs_cgroup_cgroup.subtree_control_empty000066400000000000000000000000001510742556200322610ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cloud-init/000077500000000000000000000000001510742556200215145ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/cloud-init/set-hostname000066400000000000000000000001261510742556200240450ustar00rootroot00000000000000{ "fqdn": "a-sample-set-hostname.domain.com", "hostname": "a-sample-set-hostname" } Azure-WALinuxAgent-a976115/tests/data/config/000077500000000000000000000000001510742556200207125ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/config/waagent_auto_update_disabled.conf000066400000000000000000000004721510742556200274330ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled # AutoUpdate.UpdateToLatestVersion=y waagent_auto_update_disabled_update_to_latest_version_disabled.conf000066400000000000000000000004701510742556200363660ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=n waagent_auto_update_disabled_update_to_latest_version_enabled.conf000066400000000000000000000004701510742556200362110ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. 
# Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=y Azure-WALinuxAgent-a976115/tests/data/config/waagent_auto_update_enabled.conf000066400000000000000000000004721510742556200272560ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled # AutoUpdate.UpdateToLatestVersion=y waagent_auto_update_enabled_update_to_latest_version_disabled.conf000066400000000000000000000004701510742556200362110ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=n waagent_auto_update_enabled_update_to_latest_version_enabled.conf000066400000000000000000000004701510742556200360340ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=y Azure-WALinuxAgent-a976115/tests/data/config/waagent_update_to_latest_version_disabled.conf000066400000000000000000000004721510742556200322260ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. 
# Deprecated now but keep it for backward compatibility # AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=n Azure-WALinuxAgent-a976115/tests/data/config/waagent_update_to_latest_version_enabled.conf000066400000000000000000000004721510742556200320510ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility # AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=y Azure-WALinuxAgent-a976115/tests/data/dhcp000066400000000000000000000005101510742556200203020ustar00rootroot00000000000000ƪ] >` >* >]88RD008CFA06B61CcSc56 >* > >"test-cs12.h1.internal.cloudapp.net:;3 >Azure-WALinuxAgent-a976115/tests/data/dhcp.leases000066400000000000000000000035721510742556200215700ustar00rootroot00000000000000lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers invalid; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 never; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers expired; option dhcp-renewal-time 4294967295; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 
4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 4 2015/06/16 16:58:54; rebind 4 2015/06/16 16:58:54; expire 4 2015/06/16 16:58:54; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers 168.63.129.16; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } Azure-WALinuxAgent-a976115/tests/data/dhcp.leases.custom.dns000066400000000000000000000035641510742556200236650ustar00rootroot00000000000000lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers invalid; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:01; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 never; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers expired; option dhcp-renewal-time 4294967295; option unknown-245 a8:3f:81:02; option dhcp-rebinding-time 4294967295; option domain-name 
"qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 4 2015/06/16 16:58:54; rebind 4 2015/06/16 16:58:54; expire 4 2015/06/16 16:58:54; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers 8.8.8.8; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } Azure-WALinuxAgent-a976115/tests/data/dhcp.leases.multi000066400000000000000000000037161510742556200227210ustar00rootroot00000000000000lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers first; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:01; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers second; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:02; option dhcp-rebinding-time 
4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers expired; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:03; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2012/07/23 23:27:10; } Azure-WALinuxAgent-a976115/tests/data/distro_versions.txt000066400000000000000000000441311510742556200234450ustar00rootroot00000000000000# 0.0.0.0_99466 0.0.0.0_99492 0.0.0.0_99494 0.0.0.0_99496 0.0.0.0_99500 0.0.0.0_99504 0.0.0.0_99506 0.0.0.0_99530 0.0.0.0_99533 0.0.0.0_99539 0.0.0.0_99541 0.0.0.0_99543 0.0.0.0_99560 0.0.0.0_99562 0.0.0.0_99570 0.0.0.0_99572 0.0.0.0_99580 0.0.0.0_99587 0.0.0.0_99589 0.0.0.0_99591 0.0.0.0_99595 0.0.0.0_99597 0.0.0.0_99634 0.0.0.0_99637 0.0.0.0_99639 0.0.0.0_99646 0.0.0.0_99660 0.0.0.0_99664 0.0.0.0_99665 0.0.0.0_99669 0.0.0.0_99681 0.0.0.0_99696 0.0.0.0_99702 0.0.0.0_99704 0.0.0.0_99710 0.0.0.0_99815 0.0.0.0_99824 0.0.0.0_99826 0.0.0.0_99828 0.0.0.0_99835 0.0.0.0_99839 0.0.0.0_99841 0.10.1 0.11.1 0.12.1 0.13.1 0.14.1 0.6.1 0.6.2 0.6.3 0.8.1 0.9.1 0.999.0.0-1093544 1.0 10 10.0.1.0 10.0.2.0 10.0.3.0 10.0.3.1 10.0.4.0 10.0.5.0 10.0.6.0 10.0.7.0 10.0_RC2 10.1 10.10 10.11 10.12 10.13 10.2 1.0.20210807 1.0.20210928 1.0.20211027 1.0.20211230 1.0.20220122 1.0.20220127 1.0.20220307 1.0.20220331 1.0.20220504 1.0.20220521 1.0.20220608 1.0.20220709 1.0.20220805 1.0.20220817 1.0.20220909 1.0.20220926 1.0.20221007 1.0.20221028 1.0.20221119 
1.0.20221202 1.0.20221220 1.0.20230106 1.0.20230123 1.0.20230208 1.0.20230225 1.0.20230308 1.0.20230330 1.0.20230414 1.0.20230427 1.0.20230518 1.0.20230607 1.0.20230615 1.0.20230713 1.0.20230811 10.3 10.4 10.5 10.6 1063 1069 10.7 10.8 1084 1086 10.9 11 11.0.108.0 11.0.93.0 11.0.96.0 11.1 11.2 11.3 11.33 11.4 11.7 11.8 11.9 11-updates 12 12.0 12.04 12.1 12.10 12.10.1 12.10.2 12.2 12.3 12.4 12.5 12.7.2 1.27.5 12.8 12.8.2 12.9 12.9.2 12.9.3 12.9.4 12-updates 13 13.0 13.1 13.10 13.2 13.3 1353.7.0 14.0 14.04 14.1 14.1.0.10 14.10.1.10 14.10.1.11 14.11.1.10 14.12.1.10 14.12.1.11 14.13.1.10 14.14.1.10 14.15.1.10 14.16.1.10 14.2 14.2.0.0 14.2.0.20 14.3.0.10 14.3.0.20 14.3.0.21 14.4.0.10 14.4.0.16 14.4.1.10 14.5.0.11 14.5.0.20 14.6.0.10 14.6.0.20 14.6.0.30 14.6.1.10 14.6.1.11 14.7.0.20 14.7.0.30 14.7.0.40 14.7.0.41 14.7.0.50 14.7.0.60 14.7.1.100 14.7.1.20 14.7.1.31 14.7.1.40 14.7.1.426 14.7.1.50 14.7.1.60 14.7.1.61 14.7.1.62 14.7.1.70 14.7.1.71 14.7.1.80 14.7.1.90 14.8.1.10 14.9.1.10 14.9.1.11 1.4-rolling-202402090309 1.4-rolling-202402241557 15 15.0 15.1 15.2 15.3 153.1 15.4 15.5 15.6 1576.5.0 16.04 16.10 16.1-11023 16.1-11047 16.1-11052 16.1-11057 16.1-11065 16.1-11066 16.1-11067 16.1-11079 1688.5.3 17.04 17.10 17.3 18 18.04 18.06.4 18.10 1855.4.0 1883.1.0 19 19.04 19.10 1911.1.1 1911.3.0 2 2.0 20 20.04 20.10 20.10.10 20.10.12 20.10.13 20.10.9 2015.11-git 2019.2 2.0.20220124 2.0.20220226 2.0.20220325 2.0.20220403 2.0.20220409 2.0.20220426 2.0.20220527 2.0.20220617 2.0.20220625 2.0.20220713 2.0.20220731 2.0.20220804 2.0.20220824 2.0.20220909 2.0.20220916 2.0.20220921 2.0.20221004 2.0.20221010 2.0.20221026 2.0.20221029 2.0.20221110 2.0.20221122 2.0.20221203 2.0.20221215 2.0.20221218 2.0.20221222 2.0.20230107 2.0.20230126 2.0.20230208 2.0.20230212 2.0.20230218 2.0.20230303 2.0.20230321 2.0.20230407 2.0.20230410 2.0.20230426 2.0.20230518 2.0.20230526 2.0.20230609 2.0.20230611 2.0.20230621 2.0.20230630 2.0.20230721 2.0.20230805 2.0.20230811 2.0.20230823 2.0.20230904 
2.0.20230924 2.0.20231004 2.0.20231101 2.0.20231106 2.0.20231115 2.0.20231130 2.0.20240111 2.0.20240112 2.0.20240117 2.0.20240123 2.0.20240202 2.0.20240208 2.0.20240209 2.0.20240211 2.0.20240212 2.0.20240213 2.0.20240214 2.0.20240215 2.0.20240216 2.0.20240217 2.0.20240218 2.0.20240219 2.0.20240220 2.0.20240221 2.0.20240222 2.0.20240223 2.0.20240224 2.0.20240225 2.0.20240226 2.0.20240227 2.0.20240228 2.0.20240229 2021.1 2021.4 2022.2 2022.3 2022.4 2023 2023.02.1 2023.1 2023.2 2023.3 2023.4 2023.5.0 2024.1 2.1 2.10 21.04 2.1.1 2.11 21.10 2.1.2 2.12 2.1.3 2.13 21.3 2135.4.0 2.14 2.15 2.16 2.17 2.18 2.19 2191.5.0 2.1-systemd-rc1 2.2 22 2.2.0 22.03 22.04 2.2.1 2.21 22.10 22.11 22.1.10_4 22.1.4_1 2.22 2.26 22.7.11_1 22.7_4 22.7.9_3 2.3 23 2.30 2303.3.0 23.04 23.05 2308a 2308b 2.31 23.10 23.10.2 23.11 23.1.11 23.1.11_2 23.1.1_2 23.1.2 23.1_6 23.1.7_3 23.1.8 23.4.2_4 2345.3.0 2345.3.1 23.7.10_1 23.7.11 23.7.12 23.7.12_5 23.7.1_3 23.7.4 23.7.5 23.7.6 23.7.9 2.3.91 2.4 24 24.04 24.05 24.1.1 24.1_1 2411.1.0 2411.1.1 24.1.2 24.1.2_1 2430.0.0 2466.0.0 2492.0.0 2.5 2512.1.0 2512.2.0 2512.3.0 2512.4.0 2512.5.0 2513.0.0 2513.0.1 2513.1.0 2513.2.0 2513.3.0 2.5.4 2.5-5155 2.5-5193 2.5-5201 2.5-5202 2.5-5204 2592.0.0 2.6 2605.0.0 2605.1.0 2605.10.0 2605.11.0 2605.12.0 2605.2.0 2605.3.0 2605.4.0 2605.5.0 2605.6.0 2605.7.0 2605.8.0 2605.9.0 2632.0.0 2632.1.0 2643.0.0 2643.1.0 2643.1.1 2661.0.0 2671.0.0 2697.0.0 2705.0.0 2705.1.0 2705.1.1 2705.1.2 2723.0.0 2748.0.0 2765.0.0 2765.1.0 2765.2.0 2765.2.1 2765.2.2 2765.2.3 2765.2.4 2765.2.5 2765.2.6 2783.0.0 2.8 2801.0.0 2801.0.1 2801.1.0 2823.0.0 2823.1.0 2823.1.1 2823.1.2 2823.1.3 2857.0.0 2879.0.0 2879.0.1 2.9 29 2905.0.0 2905.1.0 2905.2.0 2905.2.1 2905.2.2 2905.2.3 2905.2.4 2905.2.5 2905.2.6 2920.0.0 2920.1.0 2942.0.0 2942.1.0 2942.1.1 2942.1.2 2955.0.0 2969.0.0 2983.0.0 2983.1.0 2983.1.1 2983.1.2 2983.2.0 2983.2.1 3 3.0 3.0.0.448 3.0.0.480 3005.0.0 3005.0.1 3.0.310-6230 3.0.310-6235 3.0.310-6240 3.0.310-6242 3.0.310-6250 3.0.310-6252 
3.0.310-6264 3033.0.0 3033.1.0 3033.1.1 3033.2.0 3033.2.1 3033.2.2 3033.2.3 3033.2.4 3033.3.0 3033.3.1 3033.3.10 3033.3.11 3033.3.12 3033.3.13 3033.3.14 3033.3.15 3033.3.16 3033.3.17 3033.3.18 3033.3.2 3033.3.3 3033.3.4 3033.3.5 3033.3.6 3033.3.7 3033.3.8 3033.3.9 3046.0.0 3066.0.0 3066.1.0 3066.1.1 3066.1.2 3.10.3 3.11.0 3.11.0-20240102t2200edt-tagged 3.11.2-dev20240209t1755utc-autotag 3.11.2-dev20240212t1512utc-autotag 3.11.2-dev20240212t2004utc-autotag 3.11.2-dev20240212t2307utc-autotag 3.11.2-dev20240213t0602utc-autotag 3.11.2-dev20240214t1413utc-autotag 3.11.2-rc.1 3.11.2-rc.2 3.11.2-rc.3 3.11.2-rc.4 3115.0.0 3.12.0 3.1.22-1.8 3127.0.0 3139.0.0 3139.1.0 3139.1.1 3139.2.0 3139.2.1 3139.2.2 3139.2.3 3.14.2 3.15.0 3.15.10 3.15.11 3.15.4 3.15.7 3.15.8 3.15.9 3.16.2 3.16.4 3165.0.0 3.17.1 3.17.7 3.18.0 3.18.5 3185.0.0 3185.1.0 3185.1.1 32 3200.0.0 3227.0.0 3227.1.0 3227.1.1 3227.2.1 3227.2.2 3227.2.3 3227.2.4 3255.0.0 3277.0.0 3277.1.0 3277.1.1 3277.1.2 33 3305.0.0 3305.0.1 3.3.2009 3.3.4 3346.0.0 3346.1.0 3374.0.0 3374.1.0 3374.1.1 3374.2.0 3374.2.1 3374.2.2 3374.2.3 3374.2.4 3374.2.5 34 3402.0.0 3402.0.1 3402.1.0 3417.0.0 3417.1.0 3432.0.0 3432.1.0 3446.0.0 3446.1.0 3446.1.1 3480.0.0 3493.0.0 3493.1.0 3.5 35 3.5.0 3510.0.0 3510.1.0 3510.2.0 3510.2.1 3510.2.2 3510.2.3 3510.2.4 3510.2.5 3510.2.6 3510.2.7 3510.2.8 3510.3.1 3510.3.2 3.5.2-dev20230505t0041edt-manual 3535.0.0 3549.0.0 3549.1.0 3549.1.1 3.5.5 3.5.6 3572.0.0 3572.0.1 3572.1.0 36 3602.0.0 3602.1.0 3602.1.1 3602.1.2 3602.1.3 3602.1.4 3602.1.5 3602.1.6 3602.2.0 3602.2.1 3602.2.2 3602.2.3 3619.0.0 3637.0.0 3654.0.0 3665.0.0 3689.0.0 37 3717.0.0 3732.0.0 3745.1.0 3760.0.0 3760.1.0 3760.1.1 3760.2.0 3794.0.0 38 3815.0.0 3815.1.0 3815.2.0 3850.0.0 3850.1.0 3874.0.0 3878.0.0 3885.0.0 3886.0.0 3888.0.0 3892.0.0 39 4 4.0 40 41 42.3 4.24.3.1 4.24.3.2 4.26.1.1 4.27.0 4.27.3 4.32 4.33 4.3.3-117 4.6 4.7 5.0 5.1 5.10.0-18-cloud-amd64 5.11 5.2 5.3 5.4 5.4.0.00198 5.4.1.00026 5.4.1.00056 5.6 6 6.0 6.0.0.beta4 6.1 6.10 
6.10.0 6.11.0 6.11.1 6.11.2 6.11.3 6.11.4 6.11.5 6.11.6 6.11.7 6.12.0 6.1.22 6.13.0 6.14.0 6.2 6.3 6.4 6.5 6.5.0 6.5.4 6.5.5 6.5.6 6.5.7 6.6 6.7 6.7.2 6.8 6.8.2 6.9 6.9.1 6.9.2 7 7.0 7.0.1 7.0.1406 7.1 7.10.0.0-1017741 7.10.0.20-1023227 7.10.1.0-1042928 7.10.1.10-1068159 7.10.1.1-1049892 7.10.1.15-1078832 7.10.1.20-1090468 7.11 7.11.0.0-1035502 7.1.1503 7.12.0.0-1053185 7.13.0.10-1078781 7.13.0.20-1082704 7.13.1.0-1085623 7.13.1.0-1093040 7.13.1.0-1093865 7.2 7.2.0 7.2.1511 7.3 7.3.1611 7.4 7.4.1708 7.5 7.5.0.10-680584 7.5.1804 7.6 7.6.1810 7.7 7.7.0.7-1007134 7.7.1.0-1007743 7.7.1908 7.7.5.11-1046187 7.7.5.20-1063368 7.7.5.25-1078970 7.7.5.30-1089690 7.7.5.30-1091295 7.8 7.8.0.0-1008134 7.8.0.10-1009761 7.8.0.20-1011246 7.8.0.8.0 7.8.1.7.0 7.8.2003 7.8.2.1 7.9 7.9.0.0-1011258 7.9.2009 8 8. 8.0 8.0.0.0-1091527 8.0.0.0-1091581 8.0.0.0-1091682 8.0.0.0-1091972 8.0.0.0-1092170 8.0.0.0-1092707 8.0.0.0-1092873 8.0.0.0-1093024 8.0.0.0-1093042 8.0.0.0-1093255 8.0.0.0-1094303 8.0.1905 8.1 8.1.0 8.10 8.1.0.0-1092701 8.1.0.0-1093328 8.11 8.1.1911 8.1.3-p1-24838 8.1.3-p2-24912 8.1.3-p3-24955 8.1.3-p4-25026 8.1.3-p5-25104 8.1.3-p6-25199 8.1.3-p7-25298 8.1.3-p8-25333 8.1.3-p8-25334 8.1.3-p8-25335 8.1.3-p8-25336 8.1.3-p8-25339 8.1.3-p8-25341 8.1.3-p8-25342 8.1.3-p8-25343 8.1.3-p8-25345 8.1.3-p8-25349 8.1.3-p8-25350 8.1.3-p8-25351 8.1.3-p8-25352 8.1.3-p8-25353 8.1.3-p8-25354 8.1.3-p8-25355 8.1.3-p8-25356 8.1.3-p8-25357 8.1.3-p8-25360 8.1.3-p8-25361 8.1.3-p8-25362 8.1.3-p8-25363 8.1.3-p8-25364 8.1.3-p8-25365 8.1.3-p8-25366 8.1.3-p8-25367 8.1.3-p8-25370 8.1.3-p8-25371 8.1.3-p8-25372 8.1.3-p8-25373 8.1.3-p8-25375 8.1.3-p8-25376 8.1.3-p8-khil.un-08415223c9a99546b566df0dbc683ffa378cfd77 8.1.3-p8-khil.un-29562fd3e583d0b1529db6f92fedf409aec35c53 8.1.3-p8-khil.un-7802727eceff485a5339f081ba97c8eccc697c62 8.1.4-p1-25119 8.2 8.2.2004 8.3 8.3.0.6_87213 8.3.2011 8.3.2.1_85580 8.3.2.2_85607 8.3.3 8.3.8.0_86519 8.3.8.0_86525 8.4 8.4.1 8.4.2 8.4.2105 8.4.3 8.5 8.5.0 8.5.1 8.5.2 8.5.2111 8.5.8 8.6 
8.6.2 8.6.3 8.6.7 8.7 8.8 8.8.1 8.9 9 9.0 9.0.0-p1-24746 9.0.0-p2-24858 9.0.1-24945 9.0.1-p1-25067 9.0.2-25173 9.0.2-p1-25268 9.0.3-25350 9.0.3-p1-25395 9.0.3-p1-25397 9.0.3-p1-25398 9.0.3-p1-25399 9.0.3-p1-25400 9.0.3-p1-25402 9.0.3-p1-25405 9.0.3-p1-25406 9.0.3-p1-abhinav.agarwal-18771999cdf52e2eb4cac4515764035f673da0b4 9.0.3-p1-khil.un-33723dc9b6a306de91bc2a9fcc7768810f1457bf 9.0.3-p2-25407 9.0.3-p2-25408 9.0.3-p2-25410 9.0.3-p2-25411 9.0.3-p2-25413 9.0.3-p2-25414 9.0.3-p2-25415 9.0.3-p2-25416 9.0.3-p2-25417 9.0.3-p2-25418 9.0.3-p2-25421 9.0.3-p2-25422 9.0.3-p2-25423 9.0.3-p2-25424 9.0.3-p2-25425 9.0.3-p2-25426 9.0.3-p2-25427 9.0.3-p2-25428 9.0.3-p2-25429 9.0.3-p2-25430 9.0.3-p2-25431 9.0.3-p2-25432 9.0.3-p2-25433 9.0.3-p2-25434 9.0.3-p2-25436 9.0.3-p2-25437 9.0.3-p2-25439 9.0.3-p2-25440 9.0.3-p2-25441 9.0.3-p2-25442 9.0.3-p2-25444 9.0.3-p2-25445 9.0.3-p2-khil.un-2bf873fb17f994904dcf673399774dc8b9c79c12 9.0.3-p2-khil.un-ac0b199a717c00707168ad80f8e9611d3f821deb 9.0.3-p3-25446 9.0.3-p3-25447 9.0.3-p3-25448 9.0.3-p3-25449 9.0.3-p3-25450 9.0.3-p3-25451 9.0.3-p3-25452 9.0.4-25401 9.0.4-25403 9.0.4-25435 9.0.4-25443 9.1 9.1.0-27191 9.1.0-beta5-25477 9.1.0-beta5-25490 9.1.0-p1-27296 9.1.0-p1-27298 9.1.0-p1-27302 9.1.0-p1-27309 9.1.0-p1-27330 9.1.0-p1-khil.un-c49044ca59c0bc1edf7921109c15878ad8d6b9ff 9.1.0-p2-27361 9.1.0-p2-27365 9.1.0-p2-27367 9.1.0-p2-27369 9.1.0-p2-27372 9.1.0-p2-27377 9.1.0-p2-27379 9.1.0-p2-27382 9.1.0-p2-27395 9.1.0-p2-27400 9.1.0-p2-27401 9.1.0-p2-27402 9.1.0-p2-27403 9.1.0-p2-27404 9.1.0-p2-27405 9.1.0-p2-27406 9.1.0-p2-27407 9.1.0-p2-27409 9.1.0-p2-27418 9.1.0-p2-khil.un-50de36250e4d05c520fadf4c780da5af8f82f52c 9.1.0-p2-khil.un-713fe3c6fb797ad684383ebda90a00cbca5e2531 9.11 9.1.10.0_92772 9.1.11.0_92806 9.1.1-27295 9.1.1-27297 9.1.1-27299 9.1.1-27300 9.1.1-27301 9.1.1-27303 9.1.1-27305 9.1.1-27307 9.1.1-27308 9.1.1-27310 9.1.1-27311 9.1.1-27312 9.1.1-27313 9.1.1-27315 9.1.1-27318 9.1.1-27319 9.1.1-27320 9.1.1-27321 9.1.1-27322 9.1.1-27323 
9.1.1-27324 9.1.1-27325 9.1.1-27326 9.1.1-27327 9.1.1-27331 9.1.1-27332 9.1.1-27334 9.1.1-27335 9.1.1-27336 9.1.1-27337 9.1.1-27339 9.1.1-27340 9.1.1-27341 9.1.1-27343 9.1.1-27344 9.1.1-27345 9.1.1-27346 9.1.1-27347 9.1.1-27348 9.1.1-27349 9.1.1-27350 9.1.1-27351 9.1.1-27352 9.1.1-27354 9.1.1-27355 9.1.1-27356 9.1.1-27357 9.1.1-27358 9.1.1-27359 9.1.1-27360 9.1.1-27362 9.1.1-27363 9.1.1-27364 9.1.1-27366 9.1.1-27368 9.1.1-27374 9.1.1-27376 9.1.1-27378 9.1.1-27380 9.1.1-27381 9.1.1-27383 9.1.1-27385 9.1.1-27387 9.1.1-27388 9.1.1-27393 9.1.1-27394 9.1.1-27396 9.1.1-27397 9.1.1-27398 9.1.1-27399 9.1.1-27408 9.1.1-27410 9.1.1-27411 9.1.1-27412 9.1.1-27413 9.1.1-27414 9.1.1-27415 9.1.1-27416 9.1.1-27417 9.1.1-27419 9.1.1-beta1-27328 9.1.1-beta1-27329 9.1.1-beta1-27338 9.1.1-khil.un-bce7cbcae9cc06a03b1f888f0ed88ed6818c2d66 9.1.1-khil.un-dcc75475f02643571e902b5c2c82c25fce65dc63 9.1.1-nagadeesh.nagaraja-a9b923254f67e1ed0a2f9100900f73985854cf55 9.12 9.13 9.1.3.0_92242 9.13.1 9.13.1P1 9.13.1P2 9.13.1P3 9.13.1P4 9.13.1P6 9.13.1P7 9.13.1P8X1 9.13.1RC1 9.14.0 9.14.0P1 9.14.0P2 9.14.0P3 9.14.1 9.1.4.1_92329 9.14.1P1 9.14.1P1X3 9.14.1P1X4 9.14.1RC1 9.1.4.2_92345 9.1.4.2_92359 9.1.4.3_92414 9.1.4.4_92466 9.1.4.4_92470 9.15.0 9.1.5.0_92545 9.15.1X12 9.15.1X15 9.1.6.0_92628 9.1.6.2_92634 9.1.6.2_92636 9.1.7.0_92666 9.1.8.0_92706 9.1-dev-25121 9.1-dev-25368 9.2 9.2.0-beta1-25971 9.2.0-beta1-26005 9.2.0-beta1-26033 9.2.0-beta1-26066 9.2.0-beta2-26101 9.2.1 9.2.2.0_94322 9.2.3.0_94541 9.2.4.0_94650 9.2.4.0_94654 9.2.5.0_94689 9.2.5.1_94697 9.2.6.0_94722 9.2.7.0_94752 9.2.8.0_94809 9.2.8.0_94811 9.2.9.0_94890 9.2-dev-25813 9.2-dev-25878 9.2-dev-25879 9.2-dev-25920 9.2-dev-25946 9.2-dev-25947 9.2-dev-25948 9.2-dev-25949 9.2-dev-25950 9.2-dev-25951 9.2-dev-25952 9.2-dev-25953 9.2-dev-25954 9.2-dev-25955 9.2-dev-25956 9.2-dev-25958 9.2-dev-25959 9.2-dev-25960 9.2-dev-25961 9.2-dev-25962 9.2-dev-25963 9.2-dev-25965 9.2-dev-25966 9.2-dev-25968 9.2-dev-25969 9.2-dev-25970 9.2-dev-25972 
9.2-dev-25974 9.2-dev-25975 9.2-dev-25976 9.2-dev-25977 9.2-dev-25978 9.2-dev-25979 9.2-dev-25980 9.2-dev-25982 9.2-dev-25983 9.2-dev-25984 9.2-dev-25985 9.2-dev-25986 9.2-dev-25987 9.2-dev-25988 9.2-dev-25989 9.2-dev-25990 9.2-dev-25991 9.2-dev-25992 9.2-dev-25993 9.2-dev-25994 9.2-dev-25995 9.2-dev-25996 9.2-dev-25999 9.2-dev-26000 9.2-dev-26001 9.2-dev-26002 9.2-dev-26003 9.2-dev-26009 9.2-dev-26013 9.2-dev-26014 9.2-dev-26016 9.2-dev-26017 9.2-dev-26018 9.2-dev-26019 9.2-dev-26020 9.2-dev-26021 9.2-dev-26022 9.2-dev-26023 9.2-dev-26024 9.2-dev-26025 9.2-dev-26027 9.2-dev-26028 9.2-dev-26029 9.2-dev-26030 9.2-dev-26031 9.2-dev-26032 9.2-dev-26034 9.2-dev-26036 9.2-dev-26037 9.2-dev-26038 9.2-dev-26039 9.2-dev-26040 9.2-dev-26041 9.2-dev-26042 9.2-dev-26044 9.2-dev-26046 9.2-dev-26047 9.2-dev-26048 9.2-dev-26050 9.2-dev-26052 9.2-dev-26058 9.2-dev-26060 9.2-dev-26061 9.2-dev-26062 9.2-dev-26063 9.2-dev-26064 9.2-dev-26065 9.2-dev-26067 9.2-dev-26070 9.2-dev-26071 9.2-dev-26075 9.2-dev-26077 9.2-dev-26078 9.2-dev-26079 9.2-dev-26080 9.2-dev-26081 9.2-dev-26082 9.2-dev-26083 9.2-dev-26085 9.2-dev-26086 9.2-dev-26087 9.2-dev-26088 9.2-dev-26089 9.2-dev-26090 9.2-dev-26091 9.2-dev-26093 9.2-dev-26094 9.2-dev-26095 9.2-dev-26096 9.2-dev-26097 9.2-dev-26098 9.2-dev-26104 9.2-dev-26105 9.2-dev-26107 9.2-dev-26108 9.2-dev-26109 9.2-dev-26110 9.2-dev-26111 9.2-dev-adi.kris-33a772ca61f67a24283d4e71a63650282d6bd073 9.2-dev-khil.un-6ec1bfcc230e848a0e8f1d776d0f05a35a9545e6 9.2-dev-khil.un-c35b47d1656fd20c0ec0d6cab8583ffbf6041937 9.2-dev-khil.un-c54da2af2e5732bee11b720c199e16fd70438968 9.2-dev-michael.sun-ec36214183ee10fbe28d86a55b3aa46b54eb4a04 9.3 9.3.0.0_95721 9.3.1.0_95994 9.3.2.0_96093 9.3.2.1_96098 9.3.2.2_96105 9.3.2.3_96127 9.4 9.4.1.0_98030 9.4.1.0_98069 9.4.2.0_98303 9.4.2.0_98396 9.5 9.6 9.7 9.8 9.9 9999.0.1 9999.9.1 a Accops ArrayOS Aruba bookworm/sid bullseye/sid buster/sid Clawhammer__9.14.0 Clawhammer__9.14.1 Cloudstream__9.16.0 Epicor FFFF h ip-12.1.6 ip-13.1.4 
ip-13.1.4.1 ip-13.1.5 ip-13.1.5.1 ip-14.1.4 ip-14.1.4.1 ip-14.1.4.2 ip-14.1.4.4 ip-14.1.4.5 ip-14.1.4.6 ip-14.1.5.1 ip-14.1.5.2 ip-14.1.5.3 ip-14.1.5.4 ip-14.1.5.5 ip-14.1.5.6 ip-15.1.10 ip-15.1.10.2 ip-15.1.10.3 ip-15.1.2.1 ip-15.1.3 ip-15.1.3.1 ip-15.1.4 ip-15.1.5 ip-15.1.5.1 ip-15.1.6.1 ip-15.1.7 ip-15.1.8 ip-15.1.8.1 ip-15.1.8.2 ip-15.1.9.1 ip-16.0.1.1 ip-16.0.1.2 ip-16.1.0 ip-16.1.1 ip-16.1.2.1 ip-16.1.2.2 ip-16.1.3 ip-16.1.3.1 ip-16.1.3.2 ip-16.1.3.3 ip-16.1.3.4 ip-16.1.3.5 ip-16.1.4 ip-16.1.4.1 ip-16.1.4.2 ip-16.1.5 ip-17.0.0 ip-17.1.0 ip-17.1.0.1 ip-17.1.0.2 ip-17.1.0.3 ip-17.1.1 ip-17.1.1.1 ip-17.1.1.2 ip-17.5.0 jessie/sid JNPR-11.0-20200922.4042921_buil JNPR-11.0-20201028.e1cef1d_buil JNPR-11.0-20201221.5316c2e_buil JNPR-11.0-20210220.a5d6a89_buil JNPR-11.0-20210429.58e41ab_buil JNPR-11.0-20210618.f43645e_buil JNPR-12.1-20211216.232802__ci_f JNPR-12.1-20220202.9885091_buil JNPR-12.1-20220221.2b3c81a_buil JNPR-12.1-20220228.82e60e3_buil JNPR-12.1-20220817.0361d5f_buil JNPR-12.1-20220817.43c4e23_buil JNPR-12.1-20221021.a9737e1_buil JNPR-12.1-20230120.6bab16a_buil JNPR-12.1-20230321.be5f9c0_buil JNPR-12.1-20230821.5fbe894_buil JNPR-12.1-20231013.108e0b3_buil JNPR-12.1-20231013.32ed862a0f7_ JNPR-12.1-20231122.ee0e992_buil JNPR-12.1-20231220.32ed862a0f7_ JNPR-12.1-20240103.68b4802_buil JNPR-12.1-20240112.32ed862a0f7_ JNPR-12.1-20240119.32ed862a0f7_ JNPR-12.1-20240228.033525_kahon JNPR-15.0-20240118.32ed862a0f7_ JNPR-15.0-20240207.32ed862a0f7_ JNPR-15.0-20240209.212337_yhli_ JNPR-15.0-20240221.32ed862a0f7_ JNPR-15.0-20240224.002811_kahon kali-rolling leap-15.0 leap-15.1 leap-15.2 leap-15.3 leap-15.4 leap-15.5 Libraesva lighthouse-23.10.0 lighthouse-23.10.1 lighthouse-23.10.2 lighthouse-24.02.0 lighthouse-24.02.0p0 lighthouse-24.05.0p0 Lighthouse__9.13.1 Linux linux-os-31700 linux-os-31810 linux-os-31980 linux-os-36200 linux-os-38790 micro-5.5 Mightysquirrel__9.15.0 Mightysquirrel__9.15.1 n/a NAME="SLES" ngfw-6.10.11.26551.azure.1 ngfw-6.10.12.26603 
ngfw-6.10.13.26655.fips.2 ngfw-6.10.14.26703 ngfw-6.10.15.26752 ngfw-7.0.3.28152.sip.2 ngfw-7.1.1.29059 ngfw-7.1.2.29102 ngfw-7.2.0.30046.pppoe.1 ngfw-7.2.0.30046.rnext-g02c2c7f.2402121309 ngfw-7.2.0.30046.rnext-gf1bf778.2402120824 ngfw-7.2.0.30047.rnext-g030ce90.2402141429 ngfw-7.2.0.30047.rnext-g2e7c78f.2402150842 ngfw-7.2.0.30047.rnext-g3f3db02.2402211419 ngfw-7.2.0.30047.rnext-g58dccd6.2402151047 ngfw-7.2.0.30047.rnext-g5d6e00a.2402212007 ngfw-7.2.0.30047.rnext-gbd58266.2402140855 ngfw-7.2.0.30047.rnext-gc7730bf.2402151240 ngfw-7.2.0.30047.rnext-ge9c5065.2402192008 ngfw-7.2.0.30048.rnext-g237a2a5.2402222007 ngfw-7.2.0.30048.rnext-g9219487.2402260818 ngfw-7.2.0.30048.rnext-gbfc76a4.2402261313 ngfw-7.2.0.30048.rnext-gef6caea.2402260525 ngfw-7.2.0.30049 ngfw-7.2.0.30050 ngfw-7.2.0.30050.rnext-g4152526.2402281323 ngfw-7.2.0.30050.rnext-gb6d2048.2402291318 ngfw-7.2.0.30050.rnext-ge84f515.2402291054 None PanOS r11427-9ce6aa9d8d rolling Schipperke-4857 SonicOSX 7.1.1-7038-R5354 SonicOSX 7.1.1-7040-R2998-HF24239 SonicOSX 7.1.1-7040-R5387 SonicOSX 7.1.1-7040-R5389 SonicOSX 7.1.1-7040-R5391 SonicOSX 7.1.1-7041-R5415 SonicOSX 7.1.1-7047-R3003-HF24239 SonicOSX 7.1.1-7047-R5557 SonicOSX 7.1.1-7047-R5573 SonicOSX 7.1.1-7047-R5582 SonicOSX 7.1.1-7047-R5587 SonicOSX 7.1.1-7048-D14445 SonicOSX 7.1.1-7049-D14628 SonicOSX 7.1.1-7049-R5589 stretch/sid testing/unstable trixie/sid tumbleweed-20230902 tumbleweed-20240106 unstable v3.3 v3.4.1 v3.5 v3.8.1 vsbc-x86_pi3-6.10.3 vsbc-x86_pi3-6.10.x6 vsbc-x86_pi3-6.12.2pre02 Azure-WALinuxAgent-a976115/tests/data/events/000077500000000000000000000000001510742556200207515ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/1478123456789000.tld000066400000000000000000000006271510742556200232430ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "Test Event"}, {"name": "Version", "value": "2.2.0"}, {"name": "IsInternal", "value": false}, 
{"name": "Operation", "value": "Some Operation"}, {"name": "OperationSuccess", "value": true}, {"name": "Message", "value": ""}, {"name": "Duration", "value": 0}, {"name": "ExtensionType", "value": ""}]}Azure-WALinuxAgent-a976115/tests/data/events/1478123456789001.tld000066400000000000000000000006701510742556200232420ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "Linux Event"}, {"name": "Version", "value": "2.2.0"}, {"name": "IsInternal", "value": false}, {"name": "Operation", "value": "Linux Operation"}, {"name": "OperationSuccess", "value": false}, {"name": "Message", "value": "Linux Message"}, {"name": "Duration", "value": 42}, {"name": "ExtensionType", "value": "Linux Event Type"}]}Azure-WALinuxAgent-a976115/tests/data/events/1479766858966718.tld000066400000000000000000000007571510742556200233100ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "WALinuxAgent"}, {"name": "Version", "value": "2.3.0.1"}, {"name": "IsInternal", "value": false}, {"name": "Operation", "value": "Enable"}, {"name": "OperationSuccess", "value": true}, {"name": "Message", "value": "Agent WALinuxAgent-2.3.0.1 launched with command 'python install.py' is successfully running"}, {"name": "Duration", "value": 0}, {"name": "ExtensionType", "value": ""}]}Azure-WALinuxAgent-a976115/tests/data/events/collect_and_send_events_unreadable_data/000077500000000000000000000000001510742556200307705ustar00rootroot00000000000000IncorrectExtension.tmp000077500000000000000000000017101510742556200352620ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/collect_and_send_events_unreadable_data { "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux" }, { "name": "Version", "value": "1.11.5" }, { 
"name": "Operation", "value": "Install" }, { "name": "OperationSuccess", "value": false }, { "name": "Message", "value": "HelloWorld" }, { "name": "Duration", "value": 300000 } ] }UnreadableFile.tld000077500000000000000000000017101510742556200342620ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/collect_and_send_events_unreadable_data { "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux" }, { "name": "Version", "value": "1.11.5" }, { "name": "Operation", "value": "Install" }, { "name": "OperationSuccess", "value": false }, { "name": "Message", "value": "HelloWorld" }, { "name": "Duration", "value": 300000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_1.tld000066400000000000000000000012351510742556200245750ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_2.tld000066400000000000000000000012351510742556200245760ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." 
}, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_extra_parameters.tld000066400000000000000000000027411510742556200300060ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 }, { "name": "IsInternal", "value": true }, { "name": "ExtensionType", "value": "XML" }, { "name": "GAVersion", "value": "WALinuxAgent-9.9.99" }, { "name": "ContainerId", "value": "11111111-2222-3333-4444-555555555555" }, { "name": "OpcodeName", "value": "2099-12-31 11:59:59.505791" }, { "name": "EventTid", "value": 54321 }, { "name": "EventPid", "value": 98765 }, { "name": "TaskName", "value": "NOT_A_VALID_TASK" }, { "name": "KeywordName", "value": "NOT_A_VALID_KEYWORD" } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_invalid_json.tld000066400000000000000000000012731510742556200271160ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", value: "THIS IS NOT VALID JSON - this object's name is not quoted" }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_no_read_access.tld000066400000000000000000000012351510742556200273650ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": 
"Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_nonascii_characters.tld000066400000000000000000000012421510742556200304350ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "Worldעיות אחרותआज" }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/custom_script_utf-16.tld000066400000000000000000000024741510742556200254650ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." 
}, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-a976115/tests/data/events/event_with_callstack.waagent.tld000066400000000000000000000042751510742556200273100ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "WALinuxAgent"}, {"name": "Version", "value": "2.2.46"}, {"name": "Operation", "value": "ThisIsATestEventOperation"}, {"name": "OperationSuccess", "value": false}, {"name": "Message", "value": "An error occurred while retrieving the goal state: Traceback (most recent call last):\n File \"bin/WALinuxAgent-2.2.47/azurelinuxagent/common/protocol/wire.py\", line 715, in try_update_goal_state\n self.update_goal_state()\n File \"bin/WALinuxAgent-2.2.47/azurelinuxagent/common/protocol/wire.py\", line 708, in update_goal_state\n WireClient._UpdateType.GoalStateForced if forced else WireClient._UpdateType.GoalState)\n File \"bin/WALinuxAgent-2.2.47/azurelinuxagent/common/protocol/wire.py\", line 771, in _update_from_goal_state\n raise ProtocolError(\"Exceeded max retry updating goal state\")\nazurelinuxagent.common.exception.ProtocolError: [ProtocolError] Exceeded max retry updating goal state\n"}, {"name": "Duration", "value": 0}, {"name": "GAVersion", "value": "WALinuxAgent-2.2.46"}, {"name": "ContainerId", "value": "ebd8bf98-8a26-4607-b82f-41333b65b587"}, {"name": "OpcodeName", "value": "2020-03-19T15:30:15.326391Z"}, {"name": "EventTid", "value": 140524239595328}, {"name": "EventPid", "value": 3264}, {"name": "TaskName", "value": "ExtHandler"}, {"name": "KeywordName", "value": ""}, {"name": "ExtensionType", "value": ""}, {"name": "IsInternal", "value": false}, {"name": "OSVersion", "value": "Linux:ubuntu-18.04-bionic:5.0.0-1032-azure"}, {"name": "ExecutionMode", "value": "IAAS"}, {"name": "RAM", "value": 7976}, {"name": "Processors", "value": 2}, {"name": "VMName", "value": "_nam-u18"}, {"name": "TenantName", "value": "1feb87ac-f8b5-427e-a5b3-563b34a3931a"}, 
{"name": "RoleName", "value": "_nam-u18"}, {"name": "RoleInstanceName", "value": "1feb87ac-f8b5-427e-a5b3-563b34a3931a._nam-u18"}, {"name": "Location", "value": "northcentralus"}, {"name": "SubscriptionId", "value": "2588be01-bc36-4aa0-bf22-db6efc2b58ae"}, {"name": "ResourceGroupName", "value": "narrieta-rg"}, {"name": "VMId", "value": "06a6333f-410e-4a8f-a93a-c5b6075dfdad"}, {"name": "ImageOrigin", "value": 2}], "file_type": ""}Azure-WALinuxAgent-a976115/tests/data/events/event_with_sas_token.tld000066400000000000000000000021111510742556200256730ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E9375", "parameters": [ { "name": "Name", "value": "Test Event" }, { "name": "Version", "value": "9.9.9.9" }, { "name": "IsInternal", "value": false }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "Fetch failed: [HttpError] [HTTP Retry] GET https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Status Code 403 -- 25 attempts made" }, { "name": "Duration", "value": 0 }, { "name": "ExtensionType", "value": "" } ] }Azure-WALinuxAgent-a976115/tests/data/events/extension_events/000077500000000000000000000000001510742556200243515ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/different_cases/000077500000000000000000000000001510742556200274755ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/different_cases/1591918616.json000066400000000000000000000021641510742556200314710ustar00rootroot00000000000000[ { "eventlevel": "INFO", "Message": "Files downloaded. 
Asynchronously executing command: 'SecureCommand_11'", "Version": "1", "TASKNAME": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "tIMEsTAMP": "2019-12-12T01:21:05.1960563Z" }, { "EventLevel": "INFO", "MESSAGE": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "version": "1", "TaskName": "Downloading files", "EventPID": "3228", "EVENTTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z" } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/empty_message/000077500000000000000000000000001510742556200272135ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/empty_message/1592350454.json000066400000000000000000000011501510742556200311700ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": null, "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true } 
]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/extra_parameters/000077500000000000000000000000001510742556200277175ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/extra_parameters/1592273009.json000066400000000000000000000030551510742556200317020ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Files downloaded. Asynchronously executing command: 'SecureCommand_11'", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "SomethingNewButNotCool": "This is a random new param" }, { "EL": "INFO", "msg": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "ver": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "Time": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "Hello World", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "SomethingVeryWeird": "Weirdly weird but satisfying" } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/int_type/000077500000000000000000000000001510742556200262045ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/int_type/1519934744.json000066400000000000000000000005071510742556200301770ustar00rootroot00000000000000{ 
"EventLevel": "INFO", "Message": "Accept int value for eventpid and eventtid", "Version": "1", "TaskName": "Downloading files", "EventPid": 3228, "EventTid": 1, "OpErAtiOnID": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2023-03-13T01:21:05.1960563Z" }Azure-WALinuxAgent-a976115/tests/data/events/extension_events/large_messages/000077500000000000000000000000001510742556200273325ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/large_messages/1591921510.json000066400000000000000000002132161510742556200313130ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "[MESSAGE_START] Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded.
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 [MESSAGE_END]", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z" } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/malformed_files/000077500000000000000000000000001510742556200275015ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/malformed_files/1592008079.json000066400000000000000000000003721510742556200314660ustar00rootroot00000000000000[ { "EventLevel": "" , "Message": null, "Version": "", "TaskName": null, "EventPid": null, "EventTid": null, "OperationId": null, "TimeStamp": null, "BadEvent": true } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/malformed_files/1594857360.tld000066400000000000000000000005131510742556200313040ustar00rootroot00000000000000{ "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler 
Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z", "BadEvent": true }Azure-WALinuxAgent-a976115/tests/data/events/extension_events/malformed_files/bad_json_files/000077500000000000000000000000001510742556200324425ustar00rootroot000000000000001591816395.json000066400000000000000000000000301510742556200343460ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/malformed_files/bad_json_files{ "BadEvent": true ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/malformed_files/bad_name_file.json000066400000000000000000000022041510742556200331170ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "TimeStamp": "2019-12-12T01:11:38.2298194Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "TimeStamp": "2019-12-12T01:11:38.2318168Z", "BadEvent": true } 
]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/missing_parameters/000077500000000000000000000000001510742556200302455ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/missing_parameters/1592273793.json000066400000000000000000000045411510742556200322430ustar00rootroot00000000000000[ { "EventLevel": "INFO", "msg": "Random names for param keys should generate MissingKeyError, message missing", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "msg": "Random names for param keys should generate MissingKeyError, message missing", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: EventPid missing", "Version": "1", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: EventPid missing", "Version": "1", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: Version missing", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: Version missing", "EventPid": "3228", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", 
"BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: Version missing", "EventPid": "3228", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/mix_files/000077500000000000000000000000001510742556200263305ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/mix_files/1591835369.json000066400000000000000000000000261510742556200303220ustar00rootroot00000000000000{ "BadEvent": true }Azure-WALinuxAgent-a976115/tests/data/events/extension_events/mix_files/1591835848.json000066400000000000000000000073511510742556200303340ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Comamnd Executed: enable", "Version": "1", "TaskName": "Handler Command", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { 
"EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Successfully enabled TLS.", "Version": "1", "TaskName": "TLS", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1106498Z" }, { "EventLevel": "INFO", "Message": "Handler successfully enabled", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "Loading configuration for sequence number 11", "Version": "1", "TaskName": "Sequence Number", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "HandlerSettings = ProtectedSettingsCertThumbprint: C8F2B56B0E79B592334BA0AFFFE172EB3DEF6753, ProtectedSettings: {MIIB0AYJKoZIhvcNAQcDoIIBwTCCAb0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEGPedX2PQQm0SE3jOKJHGtUwDQYJKoZIhvcNAQEBBQAEggEAqCIhARxqObDt9esAHszPzPonOyNyDpS2f/bsaA6TiZXyUg08JARIAZYAvR3TMJaH+Q3IkTC2tcQHmlBFVx7v9Dxm0RtwGM3CJve8Xq/Jf2X6gsq79nKPiFrZ1BRDp/TB6lZdC6ahIiRA+DOeL9p1wap8R7j67s+oQiEVi0nI0zqSOPH+nXsKNhi2xaW466zdgXdfsy2amp9pO/p9/mg+W/qyMFKcnFI8d26bqaWxYBQVYqxWnXSUE7Ul7hHEKyXQeF2QwE7QVBEMJIgLyx6u58M2FYtoWopU37gOMk8MWfTgIumP0WZOPHS/n6AffPgaypStiu9Q3HYTYnqH28OtjTBLBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECCzHZJQzFhdSgCgV25G1ZQKeyJPTLZu+u8bIjtPQ5yOOLFQZj8XrJj4HhQNTndIDwyNz}, PublicSettings: {}", "Version": "1", "TaskName": "Handler Configuration", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:16.7047734Z" }, { 
"BadEvent": true } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/mix_files/1591835859.json000066400000000000000000000005061510742556200303310ustar00rootroot00000000000000 { "EventLevel": "INFO", "Message": "Downloading files specified in configuration...", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:52.9247262Z" } Azure-WALinuxAgent-a976115/tests/data/events/extension_events/sas_files/000077500000000000000000000000001510742556200263215ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/sas_files/1591905410.json000066400000000000000000000014711510742556200303010ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "SAS TOKEN https://abc.def.xyz.123.net/functiontest/yokawasa.png?sig=sXBjML1Fpk9UnTBtajo05ZTFSk0LWFGvARZ6WlVcAog%3D&srt=o&ss=b&spr=https&sp=rl&sv=2016-05-31&se=2017-07-01T00%3A21%3A38Z&st=2017-07-01T23%3A16%3A38Z", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "SAS TOKEN ", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/special_chars/000077500000000000000000000000001510742556200271515ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/special_chars/1591918939.json000066400000000000000000000005201510742556200311470ustar00rootroot00000000000000{ "EventLevel": "INFO", "Message": "Non-English message - 此文字不是英文的", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OpErAtiOnID": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", 
"TimeStamp": "2019-12-12T01:21:05.1960563Z" }Azure-WALinuxAgent-a976115/tests/data/events/extension_events/well_formed_files/000077500000000000000000000000001510742556200300325ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/events/extension_events/well_formed_files/1591905451.json000066400000000000000000000073031510742556200320170ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Comamnd Executed: enable", "Version": "1", "TaskName": "Handler Command", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Successfully enabled TLS.", "Version": "1", "TaskName": "TLS", "EventPid": "3228", "EventTid": "1", "OperationId": 
"519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1106498Z" }, { "EventLevel": "INFO", "Message": "Handler successfully enabled", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "Loading configuration for sequence number 11", "Version": "1", "TaskName": "Sequence Number", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "HandlerSettings = ProtectedSettingsCertThumbprint: C8F2B56B0E79B592334BA0AFFFE172EB3DEF6753, ProtectedSettings: {MIIB0AYJKoZIhvcNAQcDoIIBwTCCAb0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEGPedX2PQQm0SE3jOKJHGtUwDQYJKoZIhvcNAQEBBQAEggEAqCIhARxqObDt9esAHszPzPonOyNyDpS2f/bsaA6TiZXyUg08JARIAZYAvR3TMJaH+Q3IkTC2tcQHmlBFVx7v9Dxm0RtwGM3CJve8Xq/Jf2X6gsq79nKPiFrZ1BRDp/TB6lZdC6ahIiRA+DOeL9p1wap8R7j67s+oQiEVi0nI0zqSOPH+nXsKNhi2xaW466zdgXdfsy2amp9pO/p9/mg+W/qyMFKcnFI8d26bqaWxYBQVYqxWnXSUE7Ul7hHEKyXQeF2QwE7QVBEMJIgLyx6u58M2FYtoWopU37gOMk8MWfTgIumP0WZOPHS/n6AffPgaypStiu9Q3HYTYnqH28OtjTBLBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECCzHZJQzFhdSgCgV25G1ZQKeyJPTLZu+u8bIjtPQ5yOOLFQZj8XrJj4HhQNTndIDwyNz}, PublicSettings: {}", "Version": "1", "TaskName": "Handler Configuration", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:16.7047734Z" } ]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/well_formed_files/1592355539.json000066400000000000000000000052361510742556200320310ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", 
"Timestamp": "2019-12-12T01:11:38.2298194Z" }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2318168Z" }, { "EventLevel": "INFO", "Message": "Comamnd Executed: enable", "Version": "1", "TaskName": "Handler Command", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2328180Z" }, { "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2338180Z" }, { "EventLevel": "INFO", "Message": "Successfully enabled TLS.", "Version": "1", "TaskName": "TLS", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2368192Z" }, { "EventLevel": "INFO", "Message": "Handler successfully enabled", "Version": "1", "TaskName": "Handler Operation", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2448190Z" }, { "EventLevel": "INFO", "Message": "Loading configuration for sequence number 11", "Version": "1", "TaskName": "Sequence Number", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2498176Z" } 
]Azure-WALinuxAgent-a976115/tests/data/events/extension_events/well_formed_files/9999999999.json000066400000000000000000000043521510742556200321020ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" } 
]Azure-WALinuxAgent-a976115/tests/data/events/legacy_agent.tld000066400000000000000000000027231510742556200241040ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "WALinuxAgent" }, { "name": "Version", "value": "9.9.9" }, { "name": "IsInternal", "value": false }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "The cgroup filesystem is ready to use" }, { "name": "Duration", "value": 1234 }, { "name": "ExtensionType", "value": "ALegacyExtensionType" }, { "name": "GAVersion", "value": "WALinuxAgent-1.1.1" }, { "name": "ContainerId", "value": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE" }, { "name": "OpcodeName", "value": "1970-01-01 12:00:00" }, { "name": "EventTid", "value": 98765 }, { "name": "EventPid", "value": 4321 }, { "name": "TaskName", "value": "ALegacyTask" }, { "name": "KeywordName", "value": "ALegacyKeywordName" } ] }Azure-WALinuxAgent-a976115/tests/data/events/legacy_agent_no_timestamp.tld000066400000000000000000000025611510742556200266630ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "WALinuxAgent" }, { "name": "Version", "value": "9.9.9" }, { "name": "IsInternal", "value": false }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "The cgroup filesystem is ready to use" }, { "name": "Duration", "value": 1234 }, { "name": "ExtensionType", "value": "ALegacyExtensionType" }, { "name": "GAVersion", "value": "WALinuxAgent-1.1.1" }, { "name": "ContainerId", "value": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE" }, { "name": "EventTid", "value": 98765 }, { "name": "EventPid", "value": 4321 }, { "name": "TaskName", "value": "ALegacyTask" }, { "name": "KeywordName", "value": "ALegacyKeywordName" } ] 
}Azure-WALinuxAgent-a976115/tests/data/ext/000077500000000000000000000000001510742556200202455ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/ext/dsc_event.json000066400000000000000000000015631510742556200231170ustar00rootroot00000000000000{ "eventId":"1", "parameters":[ { "name":"Name", "value":"Microsoft.Azure.GuestConfiguration.DSCAgent" }, { "name":"Version", "value":"1.18.0" }, { "name":"IsInternal", "value":true }, { "name":"Operation", "value":"GuestConfigAgent.Scenario" }, { "name":"OperationSuccess", "value":true }, { "name":"Message", "value":"[2019-11-05 10:06:52.688] [PID 11487] [TID 11513] [Timer Manager] [INFO] [89f9cf47-c02d-4774-b21a-abdf2beb3cd9] Run pull refresh for timer 'dsc_refresh_timer'\n" }, { "name":"Duration", "value":0 }, { "name":"ExtentionType", "value":"" } ], "providerId":"69B669B9-4AF8-4C50-BDC4-6006FA76E975" }Azure-WALinuxAgent-a976115/tests/data/ext/event.json000066400000000000000000000013501510742556200222600ustar00rootroot00000000000000{ "eventId":1, "providerId":"69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters":[ { "name":"Name", "value":"CustomScript" }, { "name":"Version", "value":"1.4.1.0" }, { "name":"IsInternal", "value":false }, { "name":"Operation", "value":"RunScript" }, { "name":"OperationSuccess", "value":true }, { "name":"Message", "value":"(01302)Script is finished. 
---stdout--- hello ---errout--- " }, { "name":"Duration", "value":0 }, { "name":"ExtensionType", "value":"" } ] }Azure-WALinuxAgent-a976115/tests/data/ext/event_from_agent.json000066400000000000000000000037511510742556200244700ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "dummy_name" }, { "name": "Version", "value": "2.2.46" }, { "name": "Operation", "value": "Unknown" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "" }, { "name": "Duration", "value": 0 }, { "name": "GAVersion", "value": "WALinuxAgent-2.2.46" }, { "name": "ContainerId", "value": "UNINITIALIZED" }, { "name": "OpcodeName", "value": "2020-01-31T00:06:54.074757Z" }, { "name": "EventTid", "value": 139628215564096 }, { "name": "EventPid", "value": 10681 }, { "name": "TaskName", "value": "MainThread" }, { "name": "KeywordName", "value": "" }, { "name": "ExtensionType", "value": "" }, { "name": "IsInternal", "value": false }, { "name": "OSVersion", "value": "TEST_OSVersion" }, { "name": "ExecutionMode", "value": "TEST_ExecutionMode" }, { "name": "RAM", "value": 512 }, { "name": "Processors", "value": 2 }, { "name": "VMName", "value": "TEST_VMName" }, { "name": "TenantName", "value": "TEST_TenantName" }, { "name": "RoleName", "value": "TEST_RoleName" }, { "name": "RoleInstanceName", "value": "TEST_RoleInstanceName" }, { "name": "Location", "value": "TEST_Location" }, { "name": "SubscriptionId", "value": "TEST_SubscriptionId" }, { "name": "ResourceGroupName", "value": "TEST_ResourceGroupName" }, { "name": "VMId", "value": "TEST_VMId" }, { "name": "ImageOrigin", "value": 1 } ], "file_type": "json" }Azure-WALinuxAgent-a976115/tests/data/ext/event_from_extension.xml000066400000000000000000000023071510742556200252310ustar00rootroot00000000000000 
Azure-WALinuxAgent-a976115/tests/data/ext/handler_manifest/000077500000000000000000000000001510742556200235505ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/ext/handler_manifest/manifest_boolean_fields_false.json000066400000000000000000000006301510742556200324470ustar00rootroot00000000000000[{ "name": "ExampleHandlerLinux", "version": 1.0, "handlerManifest": { "installCommand": "install_cmd", "uninstallCommand": "uninstall_cmd", "updateCommand": "update_cmd", "enableCommand": "enable_cmd", "disableCommand": "disable_cmd", "reportHeartbeat": "false", "continueOnUpdateFailure": "false", "supportsMultipleExtensions": "false" } }] Azure-WALinuxAgent-a976115/tests/data/ext/handler_manifest/manifest_boolean_fields_invalid.json000066400000000000000000000006261510742556200330100ustar00rootroot00000000000000[{ "name": "ExampleHandlerLinux", "version": 1.0, "handlerManifest": { "installCommand": "install_cmd", "uninstallCommand": "uninstall_cmd", "updateCommand": "update_cmd", "enableCommand": "enable_cmd", "disableCommand": "disable_cmd", "reportHeartbeat": "invalid", "continueOnUpdateFailure": "invalid", "supportsMultipleExtensions": 1 } }] Azure-WALinuxAgent-a976115/tests/data/ext/handler_manifest/manifest_boolean_fields_strings.json000066400000000000000000000006251510742556200330520ustar00rootroot00000000000000[{ "name": "ExampleHandlerLinux", "version": 1.0, "handlerManifest": { "installCommand": "install_cmd", "uninstallCommand": "uninstall_cmd", "updateCommand": "update_cmd", "enableCommand": "enable_cmd", "disableCommand": "disable_cmd", "reportHeartbeat": "true", "continueOnUpdateFailure": "true", "supportsMultipleExtensions": "True" } }] Azure-WALinuxAgent-a976115/tests/data/ext/handler_manifest/manifest_no_optional_fields.json000066400000000000000000000004371510742556200322040ustar00rootroot00000000000000[{ "name": "ExampleHandlerLinux", "version": 1.0, "handlerManifest": { "installCommand": "install_cmd", "uninstallCommand": "uninstall_cmd", 
"updateCommand": "update_cmd", "enableCommand": "enable_cmd", "disableCommand": "disable_cmd" } }] Azure-WALinuxAgent-a976115/tests/data/ext/handler_manifest/valid_manifest.json000066400000000000000000000006171510742556200274340ustar00rootroot00000000000000[{ "name": "ExampleHandlerLinux", "version": 1.0, "handlerManifest": { "installCommand": "install_cmd", "uninstallCommand": "uninstall_cmd", "updateCommand": "update_cmd", "enableCommand": "enable_cmd", "disableCommand": "disable_cmd", "reportHeartbeat": true, "continueOnUpdateFailure": true, "supportsMultipleExtensions": true } }] Azure-WALinuxAgent-a976115/tests/data/ext/sample-status-invalid-format-emptykey-line7.json000066400000000000000000000100241510742556200315320ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "" "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." 
}, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 
10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-a976115/tests/data/ext/sample-status-invalid-json-format.json000066400000000000000000000101271510742556200276240ustar00rootroot00000000000000[ { "_comment": "This is an invalid status file, it's missing a brace at line 37", "status": { "status": "success", "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." 
}, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 
10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" ]Azure-WALinuxAgent-a976115/tests/data/ext/sample-status-invalid-status-no-status-status-key.json000066400000000000000000000077441510742556200327650ustar00rootroot00000000000000[ { "status": { "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." 
}, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 
10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-a976115/tests/data/ext/sample-status-very-large-multiple-substatuses.json000066400000000000000000016264531510742556200322520ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. 
Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, 
\"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. 
Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. 
Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-a976115/tests/data/ext/sample-status-very-large.json000066400000000000000000004276061510742556200260340ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. 
Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, 
{\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. 
Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. 
Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. 
Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. 
Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. 
Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. 
Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. 
Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. 
Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. 
Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. 
Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. 
Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. 
Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. 
Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. 
Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. 
Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. 
Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]"
        }
      ],
      "operation": "Enable"
    },
    "version": "1.0",
    "timestampUTC": "2019-06-06T23:52:59Z"
  }
]

Azure-WALinuxAgent-a976115/tests/data/ext/sample-status.json:
[
  {
    "status": {
      "status": "success",
      "code": 1,
      "snapshotInfo": null,
      "name": "Microsoft.Azure.Extension.VMExtension",
      "commandStartTimeUTCTicks": "636953997844977993",
      "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1",
      "formattedMessage": {
        "lang": "en-US",
        "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac."
      },
      "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1",
      "vmHealthInfo": null,
      "storageDetails": {
        "totalUsedSizeInBytes": 10000000000,
        "partitionCount": 3,
        "isSizeComputationFailed": false,
        "isStoragespacePresent": false
      },
      "telemetryData": null,
      "substatus": [
        {
          "status": "success",
          "formattedMessage": null,
          "code": "0",
          "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]"
        }
      ],
      "operation": "Enable"
    },
    "version": "1.0",
    "timestampUTC": "2019-06-06T23:52:59Z"
  }
]

Azure-WALinuxAgent-a976115/tests/data/ext/sample_ext-1.3.0.zip:
[binary zip archive of sample_ext-1.3.0/ (exit.sh, HandlerManifest.json, python.sh, sample.py); compressed content omitted]

Azure-WALinuxAgent-a976115/tests/data/ext/sample_ext-1.3.0/HandlerManifest.json:
[{
    "name": "ExampleHandlerLinux",
    "version": 1.0,
    "handlerManifest": {
        "installCommand": "sample.py -install",
        "uninstallCommand": "sample.py -uninstall",
        "updateCommand": "sample.py -update",
        "enableCommand": "sample.py -enable",
        "disableCommand": "sample.py -disable",
        "rebootAfterInstall": false,
        "reportHeartbeat": false
    }
}]

Azure-WALinuxAgent-a976115/tests/data/ext/sample_ext-1.3.0/exit.sh:
#!/usr/bin/env sh
exit $*

Azure-WALinuxAgent-a976115/tests/data/ext/sample_ext-1.3.0/python.sh:
#!/usr/bin/env bash
#
# Executes its arguments using the 'python' command, if it can be found, else using 'python3'.
#
python=$(command -v python 2> /dev/null)

if [ -z "$python" ]; then
    python=$(command -v python3)
fi

${python} "$@"

Azure-WALinuxAgent-a976115/tests/data/ext/sample_ext-1.3.0/sample.py:
#!./python.sh
import json
import os
import re
import sys


def get_seq(requested_ext_name=None):
    if 'ConfigSequenceNumber' in os.environ:
        # Always use the environment variable if available
        return int(os.environ['ConfigSequenceNumber'])

    latest_seq = -1
    largest_modified_time = 0
    config_dir = os.path.join(os.getcwd(), "config")
    if os.path.isdir(config_dir):
        for item in os.listdir(config_dir):
            item_path = os.path.join(config_dir, item)
            if os.path.isfile(item_path):
                match = re.search("((?P<ext_name>\\w+)\\.)*(?P<seq_no>\\d+)\\.settings", item_path)
                if match is not None:
                    ext_name = match.group('ext_name')
                    if requested_ext_name is not None and ext_name != requested_ext_name:
                        continue
                    curr_seq_no = int(match.group("seq_no"))
                    curr_modified_time = os.path.getmtime(item_path)
                    if curr_modified_time > largest_modified_time:
                        latest_seq = curr_seq_no
                        largest_modified_time = curr_modified_time

    return latest_seq


def get_extension_state_prefix():
    requested_ext_name = None if 'ConfigExtensionName' not in os.environ else os.environ['ConfigExtensionName']
    seq = get_seq(requested_ext_name)
    if seq >= 0:
        if requested_ext_name is not None:
            seq = "{0}.{1}".format(requested_ext_name, seq)
        return seq

    return None


def read_settings_file(seq_prefix):
    settings_file = os.path.join(os.getcwd(), "config", "{0}.settings".format(seq_prefix))
    if not os.path.exists(settings_file):
        print("No settings found for {0}".format(settings_file))
        return None

    with open(settings_file, "rb") as file_:
        return json.loads(file_.read().decode("utf-8"))


def report_status(seq_prefix, status="success", message=None):
    status_path = os.path.join(os.getcwd(), "status")
    if not os.path.exists(status_path):
        os.makedirs(status_path)
    status_file = os.path.join(status_path, "{0}.status".format(seq_prefix))
    with open(status_file, "w+") as status_:
        status_to_report = {
            "status": {
                "status": status
            }
        }
        if message is not None:
            status_to_report['status']["formattedMessage"] = {
                "lang": "en-US",
                "message": message
            }
        status_.write(json.dumps([status_to_report]))


if __name__ == "__main__":
    prefix = get_extension_state_prefix()
    if prefix is None:
        print("No sequence number found!")
        sys.exit(-1)

    try:
        settings = read_settings_file(prefix)
    except Exception as error:
        msg = "Error when trying to fetch settings {0}.settings: {1}".format(prefix, error)
        print(msg)
        report_status(prefix, status="error", message=msg)
    else:
        status_msg = None
        if settings is not None:
            print(settings)
            try:
                status_msg = settings['runtimeSettings'][0]['handlerSettings']['publicSettings']['message']
            except Exception:
                # Settings might not contain the message. Ignore error if not found
                pass
        report_status(prefix, message=status_msg)
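The sample.py handler above discovers the current sequence number by matching settings file names of the form `<extension_name>.<seq_no>.settings` (or a bare `<seq_no>.settings`). A minimal, standalone sketch of that matching step, reusing the same regular expression; the file names passed in below are illustrative, not taken from the repository:

```python
import re

# Same pattern sample.py applies to entries in the handler's config/
# directory: an optional dotted extension name followed by a sequence number.
SETTINGS_PATTERN = re.compile(r"((?P<ext_name>\w+)\.)*(?P<seq_no>\d+)\.settings")


def parse_settings_name(path):
    """Return (ext_name, seq_no) for a settings file path, or None if it is not one."""
    match = SETTINGS_PATTERN.search(path)
    if match is None:
        return None
    return match.group("ext_name"), int(match.group("seq_no"))


# Illustrative names (not from the repository):
print(parse_settings_name("config/OSTCExtensions.ExampleHandlerLinux.3.settings"))
print(parse_settings_name("config/5.settings"))
print(parse_settings_name("config/HandlerState"))
```

Note that the extension-name group is optional: a bare `5.settings` parses with `ext_name` of `None`. get_seq applies this match to every file under config/ and keeps the sequence number of the most recently modified match.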
Azure-WALinuxAgent-a976115/tests/data/ga/WALinuxAgent-0.0.0.0.zip:
[binary zip archive containing bin/WALinuxAgent-9.9.9.9-py2.7.egg; compressed content omitted]
rb`x^_=um/Vzp}Cbm}/[?*b/vLmXnGH2QF+uErah˗ #T}ʮT.q u ?񥜑}~2.[&GlF%EِNQ) y̎rHZٵRAxꆠlT +ZZ0>Nf&v6=%z ipU plBDK*-\Kt\,zzF ڕ` (Ny?68ch^lFTQ=,%(^(@WVlJ0` N Uel]b\yug(VzᵤEC|̬ NLkb(_t}8{`Ց1SG+DCDgtw!߿~W6~UGev'P~#:wF;ǘ|=R ,t J#o'MA;%]R>ZG0YՇo>S%2Ug7 n PͰUKsdz?;%>]eQ툑 1+D7?:yCy? si H/_]5]s~>=G 1NOYwC.MvgT#WٸcrFY X\=}ݏpcKYx"Htfq9ZL$Pͣ~C{-!lSUUA 9A VK4:/æ' < qD4ߤ%oi s g?!z+!Դ0w nAwWE@r$J+>'~="5qn8j%%wPpVӀ_\r╯7f?)8q? J<'ht~::z}xk:ڠq6p!F'aO8^Sj2K ^i:} l>^S!VEb$p]Ih exxIȳ>T[Sn,b +Ax2Ra P@Y`i(A X>%(+1L#7kk5/$2f!P3xl3"ZЦh,ṷh@g'^pu 2J-va{K̪ Ka?OFвy G,bxɬK!v.lP'|t^4ПIa/Y- /AӮӜ}L?$i7{N-ﶒ(tqNY`x0f0` ?Ջ86&\앤b ]1O忊.=9lAPtazpE>?mfw[4mO'=L>F}a;+D} K* J$9$$ *kgv =RCc]v5r!U~0]X@B/ŎrN2B3Pi?|cҸ a 7+!Ԯf_#Ck? Th=s~)(XL'-/Nb?hJ{ջ4L}lEo -!]8&LyeA?%8kC*bt!Jb9xbpV(f.㔠n͈]-`ZDR/ߍr;AO2O,b9%{b\1Dd>'Ty3 ԛj=QSfCnd;}EolQbh| Sm %/ YN^?g'ec~#+\TEaaN 9|+v?]7L\[m;չQ7t.W 2yjc!8&7at)UQC\(Z!@&&T-TR- W7ccQI¢NKTmG߯j$VRc[oR9st 9]mOv~ͧ1"E1QI`8&`w}Cwe4AMo2oR&} !Q@ %NRAɸ8P+]][Mv2DxJk ouĿ⏢T,No+D1@÷[7+ 8 jKN ۾ Q`aCtu'⏾&=XDpw0!`z.6!:S|`)Ōþuht}8-BW8mXe(ph+A.el8 vfMC^});׆`學i $yEaJw63T³)i+AN}+Qc3IO3˪cq@oR?<Ʊ<}Yg>i^r8-߶oVA[+/.Gb,k7v sG,lT,`+6,TKzh/27 c( R'kt5Y_H [P{bj^vh01((N./>Sfnmimie؅AHY۞ۼzI15xY|U߀b2LY_^`hy4,&2ϥ$ B\~Z#Y*]pMuNE +GP %yKCڦ+唲Cª);DqHV;Mö_ F#M?Ҥl^:EppQ'5CX*( @F#QҞ_;EhAI14*/ u1Z q1߫==!l#vԞY20yL`@fWH:m},x!7 }ʐ_T( \9ԃq:_&'2u.LS9+$n"{X z*#-*QU+@W9!Nߔs EdzTB{LvmS422# Z~ uaI9g|6 xJ0I[ +r8VD22}ab/ CQְX3i~>7_aH$U1eR_R"U?j 'Mpԓky6 }E=`+|5lzk~"JĚT6qxAL4Ǚ r6T9xBl3$BA(B h8<dz ZX_NTPs=K ksʖ[.n WQ3W鴵,enځ%yϻ*֘A[ 0 @hi[hj4trO%Egdg6s[_cpg܉tmT^Ϯ$@2JP#CfB ܏HܷVma:)}qfa~.A~q*CbZbR+5-Tt[vf%P>-/\BD)T&T.d##Cj4a4%"N\h*%ن.tlOYh ~#]J;we:dQK/B |]Jj];|:LG^-^e?0EsqK2 "9m= mtID#Q״@wf4o<=? B+5yCƁ8'F." 6J Ҥ@=]`:@s(t Pi=fω]ȅ0(pm?y r4!RN Fl4 OYYI-sD!R*Gh羽I#q-8x{ժzA7ۺm[3:m 1D,F0)Spe")돕MBxतF>s7g{ p}(52 P4,dXv8!ǶQIuϔ_ ;b4;!|MW.u7o-\ .Q]e( 9؋#Q@0E>Oc,S_6 J"wVR8Ȫw!SeArKHV#nnK nvO?nƠ/0)}i ՘^F5z4~yЖ7֒`pg> -Նem:nj[p8H ͤm)70^/\Zl,,բit1W1|(,L '3I#H_Ü)xp396M'6qXs)=$ܕa,_=!8:&( cjƑ{Q" ! 
/{*`\Ӗ@0ntXYt'By ADm'ҵ[n ?|,e6 MHdzՠVKOa܊UB`+R19,ͬDm1Rn7ჶOd;\wrMXI<>$C42v Qֽ {h^QY҄>]̌ṳrHې!щBS屢,\]E6SXk"unl^l 7(|Q/^)a*` _5uRjdUNrP^q eНj6SA,;2]?6)o\M8w?VtXrb<~Ww nn=^ph~VDW3s;G'y27q̸Q_- [3RBʌ)X2h,Ъѯhz dq;5aK^PC{|XuiZT4w͝B4$H 1T 5j.0|~-Ă!6&rR$]qԘ R`~Sba~@NLKPC`N)Z[IV^ hW*bO.:<<iZοciL1ځ&'3-nW~\%q7Ӧ5B{P_ ۙ$ 6V8e˺gOp=㲲 crݡB7϶w@7GMeaAiK\NW 1LݩJ (pk1Z77?>=?{r*4I9HD$F]I:ӧA8.u/ u"q ޤSNa2uew (.W8oG_6 a@}vy5s_k)8]Ah%yCǞ#O^9rLo(.aHX=ܟ=NV%yy5(>F @DQmBk"n}p?6J+4Рi 1dJ~[ƴ:ˬu\v-SI7֊tDéH&Tz>etzs}0^}n?߭\/'[(_l8c|}ҳ.}C?ڰ]E젍XnN SDC DJzZEZx`NdK"&&اlf1(xۅc .@H˔H*I-D^XZ@o~}Lc:dٷmALIA6f,=rS~ˀ8Tywqؒ32Gfwk!FwojHbŃLG\,V?\gkMla^LWt[)), ܗW 3;:hQ!d/B泌EVCm6;Ʌ_XEGEL<6YS_|pKojE>5͚;c< c8?߀CEI#`c3Xe}*!FѾW!dp.6aP&q ױPy%8GւR\8>qG OFA 'DL"Ա4D@Kk<3oY94N&&H]6zA UbB«˪P<*`tYPb%'ЖD@O&r{ %%c  ޟ}0}T>]SHż˘cŅ#e$I̤^XI[%hxԦl3n]Qi}nBls8aw.ɜCcψG攓qaO8cP8:W'aZJ[knsN>cP2GrjS7egɷZoCR|_͇O˂o@V&xj\x4zS-|g NLq k+xxlAz*h֏AE暫H6Yknsal\:Ccpc}%nWWR$){tt_QG[t˫NҐ&Wn#R7F8ko;%JSku8]>G sMb8sb;!& 1gSkӺH<'8tauN%֘Ӣ jB93 ތN ѠWq55U5y8)%#|{;MDܫR/JihHMja%;Iv2IbsCI~ a$wJc"1zu(L&5?1)ǦNOO ht?{F0zM _ =bD)ާD$ (UB6 eKg޹z7J"PzWi9HfLaE[\jj7~ 7Y{zHS\?9|=M~3QHFjQa>ͣ1xS> f' ni[Rh@ ǢZ:9/ڌ!JߓAXL3opV2w<&!Uۄk8`R$ASiqjaR]Q;QF'#SΡɻvd(lZ!,p!I b@q24fepLoХDx0K?Q;|@?r6Df#F|bVpXvE;^![&dCw#%t_lPE82NAn9Vd1?qMOahT%@Y Xmr9w?( |&|6fΥIK3i Qh{Dw"L@CW(rLi9fFLid8w$= 0%M loX WRuPV s)l*(ˑdzE tGʛɲ̦O}Ky1~hi92e$F+R:)ajCg;:j~y% ">*w bU?azg;gp6#U"bYl:eiYŐx=iC{#_/= 6.³.] 
::Dxn2us|zAaҌ d>ӄ W99|5\xeGdG?^3Eh X֕DlImOyx4VJr$C/> 'J;;L־aP;kY;VB[5Sqʦz3xBmEݻh~U ^mS}r\gZ4ߑhroEh:HX>YҺaP6 V1+arͯ&MV%֊N?{sQX5, ]k._gF'mEveչ*]r{zPQZaoX`+8:>zͤt!g-j7ԩ-\]Ω[6*VP~ήa/gMu"Zu zASO !ĀqۖWpmXV8b3׶9.2vmq5Vc+C]FS;0us^.v˚]8=&)v,/W}ȹġgvlPԴaPn%耈 h]a͸svG$e׍_H?"kSk:4r}z|!/}ǺC4}c<'%J:eε Sd\"6 X|Zy]ٸd1޼V *5Hu V#4 ̀ OUm<;3Y d0vOhf3YQmcT5ۼ?qсn8-Y(^[O*7!WO L^9aO_@ڼ/tNBW iG''^MR?;5:[*-|7 $Ooe\ҺCK4d X@w>-h ?Iޣ*';3f@B6UX qLCj+LM#]$ަ!U" Y}gNdT[yA[h.ܔJ=iNe7CYz1eR.\&o\2 'g56kSr|95q yo߶la7aVz+dꙦ+F%S,hCz4v[ 5d<.= *E^<ư _6ͼ}*|ācgQf+*;] ,7HӡY_syG9yBeB%5!Udqc*vd v渐;pb7Ȗ?x|ujZ;zh ܡ3nϧ aC̺?j{jC.`@L{ܨ,G( f&Z}yuGf H8?knʵ7Zv[;~g\kG_y+tZ,2_3%V^>bMQʠ򁐲yveO⟖Ʃd<9]{fH`iF$Ks W& vn/`+ik͎_Ԫy$Q91|`o\\i%$pHi<$ɠעcwHQ 4H^J 1&iG=CS`Aa[ㆴAji d 8{rzUyRTQg]ȕzIÏj>.-EzbÈVߖy*RWU|CYL Qg,tXDV ʯ$d+6u8>8{?.Ǡ<`|PK6HvHCȸ00!h C ^$>w&>}=/{SB"=sĎhH\"sxoo#Tq2|/\+cza S i^p"%ziU I(+F' JL QX;D/(EQG9ц a ùm" ASPcrKE&ZP;n8${7[DaNk8|*)nI"h.۽50Fz^I*@ϨdJ-oh˭w& g<h$H7J,IxÝ r0SO*_;2;<ކB[.} —!;dX cq RBo'Ֆ-M~fGmm0Yؑ>GbIzNÈc@ݲlXWf ~s+ .sjO@C"ΐ 1I`:.d֑iTB$>8aƼH*.C1[CN- 9j7%0yL$eJ )fB7hd>Vk)>R jiU܊y@ `(8PMpҚe[a>4 DV^ Ll<#}1ħ)a<5OVy?U$Ӽ;LH ?X$?dAM/'wPC]Z)㧛x3sG _1؆-?\(DŽZ2-}k_M_wdžؑRvghONqC8B#e= Fuݗ#qdʂu`i| ! Ja;yVA7ʜo)0aTMp EYa%gy!}Z6G^qq೾x /&w [7@)+A$,N#i8 $U5R7XL B$qllUTi6Q j^ S̫GKwL*LRWSf="(1W]dz!)W"/Ns1LTZK[M=x{][;S~jr5uLޡ\uLMJǛVx d^O+~uWL[%:ԬkyaP k FˈE7짒p_@UL@G171<~ {]5''8p&_bћ,ӳ4:q88ōz`n_1`B3=ܗA.UNV}Y֘HL2. fE-p snTV䑜TC PRn<Ȯ罧0K@x4q$Ԭ$ƅ3 =JR=7EC]?.Ǵloma;-ћL;jJ9d] B}5R/v:b@ ~d? [Z(!&H8ʗE ;k@7/;d1tWL;8“:.:0e:9t[e h:%Qt 7>}?Z=pDdj c𩦉U[;;Y"`ˋ-~hZIwQ~ȶ4٦ZQ»ovfqws7RMSK ,Xq~K[]b1("faM0554˞d#8a{>o^ [UXDEQ>jRдZ>f۽tz\CtoJzwɌ5r [7-pJ) ;Y+ASn#ݠO6JyLB5ʖJܼHM^[>Ƣ0Ni0(YQ Wd€kr)4am,O::uK~`\Sv~kLK^q+'mei^1$x(PC2PFL υPa苣i0^po|,r8^?z1Q\\O(-OZxNb-fӦeah Tvʗ;Q [gFali,5Faѕ3:d1[ne{թ\ֻCW2sٲasرJwT^'F-Gy Gx:ú)AE"%{ yIh/D^I1{%');>a"IWH{?X“ej/NjvJ&zyP9mNkmZgcI. 
P`1`tC:s8.Fk"4K5ԇ3" :PCxzxHcs2r4HW`ӿkRΊb\`i8jYXuVM,%+0XC|̲k迆W1 "$ okfssd^lЫL$ȹSLOWfyl<1fЂ)d?iО }Jd}u΅/y" .m=}jTn+fb<۷'Dj(*i+2ae7镓,`sKNJ=YͦFʃ ~:'=4c\Ƈu¤62V^GY07x aH" K=)C6GM|;U$=gh j#U;"vV<- VY+D~"` WB,>]XQezzĕjz왧 Emr:/{,1&W(>dj ưQ-jFoJ2,~-jG rm>) !Ųߴ) I6+Y,YVa:?AUF|,? avI+ve5/phOdo{oQpbvU:϶xyXj mY"X' KHN+w _-I0>_%0̅7 d O`֏l?q`,uUj7-i:`4;? ӈO <,ZmEv:$*mURr5U%*j-}{8g׋&'XM 0^YMӟid.W]^̋֯{z," -p9֙6wU4mů]RodlGxye=[Q0i螎?Td(Ojn4"-‡dqobwR\ʀJ6y]M]|*ǩF~t_]8*WCz%]]UU8F&'t 6mY`RHE_Y= PIjĨA( {1e"ܰ5$rG= 3wGwrҽm,K͒g C-X'wP-Ҙruc0-G{Pw)Zyd{HL ֲaV !nK3HR~Ȅldt~n?} W r.^҈7ʟV 7JSOP:-)ђycPv#/.Ȝ[C[{^:Dpc0e$*}UM*YOm(uv! ͖:+ 0E!'$L=<Em4. T '`+M"׆zړ\L AWZQd#ގ;@L]=Ipd.߰톪b4u{oMΖaQcIEߓMܦa'udߨ)9rS)#G/ +q$,lm9C m"kDIyDC(~]9qy>o\_Cڜ6Jb>W_&d~ܩ]3**"|bGLӰٛ::Y:9Y:Z[;XNlؐ~kmKU sXnvJf+ecNȕ&id Lq} 2G"iмlKAzQ]Lfr|jW8Vv&[`S#>`2j9T1d6&SJ^k]T tŐA%X>,VoN-o8$C`2u굆UTu`0H7v}rrnڍ3yk'n;*Rӧ]:@#!QJr\2h0!pSO*.TZALf(iL6[=f֥3HT{ƀTatҷŊBqGs ȽO>DА=iWFBKַQ^}po֞wr'ɊsҺR";~}(U.pi~_lg%/&97#K]Gd*PYW}p4 &;s- $c=6b ]32AdxƱBL`>zD2] rX`y0'-0KX':r< L(ZmZE7*Y%!1O?Ѵkdh>NW[:)C^+v=z~;ڿ&qCwz Sn~Z&ou@ZGw 7i>Gyt.&;;#L_f5ڳN.ԣ56zrx߃X%ҐɑHt}p_nUOEh8z-gX+N/@AoJ`?BgcdV,,J%Zlš<,% BEsf"ɍio a%x"[oe`ڿăa T `궙8\CGt &H\Q@p1V$' *0^egLkQ&@ڠ_`ތ1SUaM٬MEU*? 4>p3w7W%Pdd=j6iF * ѩSm?@AƳwxs1*! q,z_e3w}{KMKݢ͸(^\2&(8AHKH̱a~ 2U~__A.PDE7D2\\ |n2 ݉9)aG*h pz7SذQEװՓ R :PV@qLGfvK ^C:)@&Gh6U%L *rS=y'ю#/)  JiyP&7oB[ەz,Yí@#2r96;wV\u#9Vv,鰑3ۺ{l9REH )`%qէ4P^#=[sb<c21?掼ˢnA'_7&Ldi3n5;SfXn7\`WMݭIj%Yģ g_ Ǘ#tn]ffiFaV8oY[&41KJ1Q]Sg4[̑ quve^7@ Kq]Tb\PWR.d{vv3h/e+w1Z5&Çn"bMJNN=.:%pՊ{b W! 
O# Kr,Xh$l @gEk`gQ88AncȯZHxQ"S)S~~*P&cuGOqMZ(Ⓙ|Qm!U|9xr7< : `v)u!os@B8[k|vU8#cǾvm%f|o}&->|֭>TnXizOЉn8DnT||8[_A&ٴ Aܓ;sMUKG#m Ԩh>)aƭ&V2=AkSq g{V`vLj/kSѷP)/Ri/sY_unY5}W!U|z 4XyӿUӲnAՠA:D6۴LͪƓюLX6?xXH&mD1}^*8/'L*/RICG cCXi\6hV4H%xp4[,[4Ӭf}p-.*+;\% AqM.1)+Bd{Og&`Nj(I|i eܪhMyKY:4לN}wo~L^$m5gʗ!liTPXBYZM/uhdk7Nh]銞o7v假^X,tyw/tܖ.km53P[ZPvzy(;BQLϞXt689y{CxMsQRߑEw9 pC"BMNH,]W YU-Pm͸HαG~Xߎv|(J' 8N[IYd"䏁3,X l0*Rrb0 ׮!pcHm=av&qvj; n(A⃍alcᾂ<@7# OJZvS4W\ѿ^ ^Lڏ'υi\yCؘ7dڰ9h:'@`18˭Fn#aGS30@۩_\v jlM|m* J Cz۔t[RVZW_L_:< vuH m9otjVCoZSeq>mMۮ"4Lzx7=cޛPHN(yMUD+CkQ#6YkjA;H <#H[܊jY"~`K‚.YCPQ}(Vsʧ><|vp<*X >U,(}C d`Fxe/c8Tn5!yj2K PȬb?zk aaַB0Σt߂SVWW#N* v͜{" :Q en@ Z_)tXDO,hlj H L6q Sh_ $,l UGo,Us F EE OKW(OZ95|&ܹ)o*-oĈ jt15#oKbDdEĄ&;CR8#y5X_ : ;0R4A20-; "Ыpեꯍ )#B\n,qMgũ{^BXQ\޽VCs)D@R"A8.\LHtOtli#^BhpA]#@ـD6- 8l`blեқHcuMk g^߄a8b |Ս 2GK<OWqwOBʁzfsޠHv}v""wC69יG;@z_z<d AŁFow,7f)/葑u/)f :LO{$Vcyo0M`4Di|51>ښ8a(TKGmxP}nn`\l|V,xJ;I,:FSyAbĝbA@2@)jgY f1j ;Tj&_SH}ؿiOz wcZ fw+ },BslY>qI/2n{{XV.|roohQzf0_C~ܛȪ)GʣőxQTD둸B'3&gAɝ1mHDb,x׻"ȼ OӤ=JE}lfq=%DS'ʑcn7=nJq `.IS`7]8f3mpj>w4d0퍳!{8jVu[9 A!DP甛&Sl[("rݤĄ%@[-h=w+];Ŕ Hm2H|2Ք~V h(۞\ׄe>]i!FCLC<֊P"F`D8%4Z2R ŝpt$ 7|{z~x=Q%zez5;7͏Wh- (`}x Zș{K˩mW,7рB ؃ObD*A cwG  9 J4.%c3k6K# PMIVeׯSBr 0iJ] &Jiob3vByȞӷ&1TFѸA#dc$ (賹UTޝz+,s N`d@uɧ9uH0qh$>uԭsF8 k zıMj[L5D)g0L YnloHB K*4Ȟ }( DY}ߓd3|aߠh e PY139&VŪ՞jnS.#glG#?qj1ȱy|szgdl3t!^Z~[_GH'$|#+ɖ%ƅ7-%9O'r G>{?t4Wdd^˛ ->ϜWC7RO~}%&DV^ y'NsPa~p?5Dk$`_˝u|@]]ؘTCPoMFBB{:QQ y)f.2R,V)0J  'x<L mĒZJO A܎@RRT Z MPryQud5w ]bd/>i,lH6L !܅Ʌ\ǚ|b@#]A>Ga=͞4ͼxJ!r?yn`0 d?=zY`0.~ʽAA x,ɯZBʷb]MVx8@0 ۑAO8%=>Sb؟)4}(:)rˊVv-U9(Pr9dyb.r OYR;Ņ54o ̘}mg;wdHsI Rp/DFh[pMfOj*eƐ'71+mꊡ퓊jæŮg߯)#WԕInH;H5PG!xpS U^:4ֽN8=~\dDǎz1zh @MWj .WWLQpbtaW}.N O|ChFj\I5>܆TZSTf[Q:ÇdT`VfsAH21q+W%.\ׅJ/tB]+Þ|$HaŎzlfo ֮b0pa.<U\!2,iFN/ j!-m0{x%YlYqEE ? D G|nfW А BUނ̪t0tg / @'WHC`|fE;8S) [#4)1˰٫8K#\ X6h L[Ӝ"$JncPȎ9w'’}":tjnՠer{J8z~!` 0>"#0&dCrPLhi6;?jiF:739 rNy[?$ cQ5V`iᾒO x~Tjף؂| 5phեh[~y?V:Pc*~VRrJhEi:ILi&Պvؔ)/z򶨤 Q`Vo泏-hk#cXjkcsTN~l%j5)aer2 4$d. 
FUWH;8D sj*< ={0KMRݬ7h4QT f8cA,R]#/Ŝ[q\p=A@\@D7*x/Ҽw}l5쟷(5򱴺m E8o^A%VvCkͱ&$YcSOF3)Å"џ1ɢc:{1˯c;psE8}#E%Qf:S?ZKJNd"*Fd5!~>фw4AWTǪC~+(M0ӝ%1op;#_1 =LP /Yց||#1/{q;n&OɑȰfWc/K:'U-V^]ևZ_fwMMCJZ{7ukU9/;^w{q)K !W=æ֊3$xV(wfF`S(Lr ˠ}إj a<t1)9;쬘kˠg{|]ٽ̺, 4gku+A3DpC6`9KC.0)Z3Ld YU"E5P&5z_c_!k _Mbmhgd-Co@bFpm@x=ٽ^g}Kr$UFAEěQ7IKc1{vd%YެAUZjic(!ZdН^\IO B6q*@(x~_ ܤk j# dv6qF̈H "6e3XzCu}p`4n"+60*jIlk Ku*&5|b38厳Qp[˫gA+?KIfIuk+7+`PO|cjk-.N@Q̑L^_lQ!ȷtJb^ &,bvy[sRΖ47:߅ ܜ6<o 3 "E4X5}Bpf8RžAycUю9^?;k?`Qy\/Lҹk\LfvPc|9Vdy uHm}R`TV ;v k8D_ֵiFQk?.T HO+,V[y :sr #0d ~C5$EMHύHXc2d7&/^Xɍ诛976S4˽[qݲyD5CNO՟$I-YXY>CUv- U Wl0^jJ6YW' C[l UȰD+<x5JpR+\//eXT`ۮ-nloTh]Qdُng'CdlXW'ӓy Ea)cܡLOjr@y/b]a,-"})ueɃ!#p@(]YkzYC=Mbrj=ԍN'u[ ]BBB6F{[`yP3TLwyL\P~7'2u/gE޿|ؽm"k$nx*Ѕ@$=\'%. s[ؐ}$j4S3IRt5{z x,$zF٭ٌѺ~vdȄδV5^"Z3@#hܴ iz\M B+wcڧ3~fsKmT>g罪30$9aj'HˑoR4[beP+r-XK (YlttRช@ 8/NhHB&2Nls >r>ԶϲtV|VkeXffyYۿ^n7Ϙ6d'*51ǙZ|D;uR_eV(B'EWi%ɇVwl5onΝEPDFŔm*k񜤳2MyBC b}`pvr]Ĺm# q+ KH'҅GbW̨WkH+į\MW]G(F҈^ zO{dh_W./>fX\T\=ȕzo N#wݻR AX,WAMSuV6\s?<`'\O<] ( xl4eܥ(XLh=I 舻p3iLnI#Z4Kz?s2MCϸH PY,fC  C s.{u6=MW)|{VU{uƻ|ڗM77E*٪^yRb|C_BآFjNJ|Ni"RSej&[LCSXa+WC%iGMNQ/B,W!JL]ZrF!yKUUiÒʚ9f6/B:c[Ա,#;o50j$Qc}*g뗰t+ǝDzN/ʹ!\3J,#ĂFrI4stAP`3ˢfÂ| -ѽvłMG'v6+=A 'C5w]GEupw}/&Ҝ$"eqU勴EMQb&T+JդEMu̺jRʠKKf˒ĊԌq'Y=m̹WFMYmOsvm41KysF6yix-qSiVjvAKf^nC"Wv A=0Py zE.fQ5VExshh7uתoϑ?,-9m@vcd0,Cȫ5 /W,z+խ f[X=L?ZeU(k5^ȰmWUC @)zN϶x+QW#NH|[AQ-_ArQ?RE+-Ew.r-c;qx*WtZ{Tɲw5c*P6!w-eO"OvOɀ|v4 E%e\7U5 %p|5SClgdue !F pYi۶aj9l]4_>M))" #׍X w:NB7t#tm[/'1'Zc>W8(H*wY:#h8ADsWm@),LX=b+,]ٮ҃kNP"_]ykʄ 1Zjtˣ@q4ڙ)S;~$ t;{M]_#PҘ"5ʚ{0tk?V2N_ !AxMN9JTܤ=qcM?/?#K,oK}.%HQ|Gz_?Dߒ&_"#$[mk`l}w$K'\:/|iXd.Jvt%U"xbTLe+6B(@=AKHVN5Zar=WPWY; bY{rcHKH VRS'z\q`u-BuO,AxHoZA+|MQ r_lw]qRP(k7HM*S*yC}Y:B yhƝBskq^}tZTq^X6 0 &ͯYe=#?A2!ҡS2D.^ϓ{4w]k(+{btn@&6fhm$=dgkt Ҋ͉QVPT#<3ӸC<<ްno. 
WLB®Ӟ3,zTD *g=y$SXښeS}G4[KhUﴶЙ6isM-PϚаnfS,KzeU8`0{F ( ާUH)`yp^yöBPwQw)'fK=`ЍS1e^6ۅDQ`fm$P]]0s,{L)I).@`"X^Sد=qDXAf=2tᔳ3RQ}q?]ҶZa-@0˓v?5RnGc`J:F2.A}k NZ]"3QCi.~"O 4HmPñ}#Zn/%wD[Όaƨ[cKbŕbX"g/2nI˹iui^Z \?8ׅ.r5k*y %5 ŕr܁:$۪ ݖ7Y^Ɗkߋ##wC`_:ufd<ecOCL YP*;c.8}xVc|Wg0k+ۿq@& swR )SL7}g{r=8rrO[z}ͭ{{z(+>zg7O&Dw'i($Q#AI}B1W5CM=XӱR0@ HPZM_k iQ0aZ@d"vc.Q# ;sT.SP@FF5C y''{[ ДѣYb83| lS 3bQ3%$bjp4괝y#W:)'"rsZ[n5q=1Fvn=tz5UJ‡ƳI,Q島7E]NH=XOSgC.cHO1Rj<ũ$,HNv32Ti7qVDn a<48+)*}n<՚' `7EcY2iQEЪAE78|:G~yF*c 0j[$+l~hv낢!z4-gn+(j.YRo'6άruG:YPG `K$w*gY!K~sXz+%fz$7YS+#'1MZkaQwސ}\*ݡXB-jj&2#Ӌ0GsJp\R"iK[Zni2<[Fw͙^OQ$ilԯMok=G@ z6tՄ>iՌ O(`8Е2Hm܂$BpNUv#кӈgS$Qyt9){ uU*!s' cVlWbɨ IiK#%ףk}Ҵ4/PT, .S=U?~m;.iyA OP\JI8JXd+0]S?GS4R/gFaMZ±z"[rM5Vi?B CEmJ4OQC}17>0\SGޘTNR̴G8DZN"_?4bɳE?A 4j@_eRn::V5,o-N?ޕ> .C5B{iϫ!w?;?rUwKH/LɈw[޻b~boyYY{jYuuj^jGu'Eʸ90-דR*!|_Y1C/2/'+a+/3/+uJJ:},~fAΚa1܅Х旂)rŵ~$*-7a_ ;_5?<ױ6:IbzI#&([c'nNZ?E t*e%w-koр6hlG0P4.XRxK//F<:G{N.y 1rR(ZlD*biZ;.^+|u<@u )=%9AVXw.?Y26 l7d*EK:S=/d s7^<&kړfH% G|(ChY (2 eiWaU—~qad(8 3"DrݱvMͨ>DL38X/JE&l N 0:EV۔B|n\*l-k (`z= nLz@*Z$C> բ@iEWceSyl[j&)2 .bڙOp^20OpOX>=\%G .@c7Ғ& ܝڌl%.i[7òGAcSAH1Tufb<Vv{`)Uz@0RQK+ AZ]a=c>C}%1oSAge&?RclAIDrȫGrl!- 8Y .$V'Y (ޅϪ>AIЗ2T\`g|[]!0Y!cq6I/YU_{My8u]?, 柈xiB)`"f3\-;(Yj3]CG(;ʆ NYf@r\yڨleu# =mPY (*\6.C UHk@.e*Nx+lLǽGxIyÇ>6*GvIR9X Xy(3F.S1-vN΃ڂ h"* A%A;Ud/9|uenkC&`cs$덠`%TMdBsD9獴)dfMN/[k!2cx`TeBwObgh+ss M &AӉh"׻Mm&Y웥'@GEGH~^n)Jd='VKe~RU"`N:"87|=jofC_a UʍA 2IbjfwTٕq񛭐M{?/Z=ĈUv1p%.U6w D66<2ä;Cԩ1 _ldћX?>p>Fz/, |'C@m5zOg.B)~_[Ź선vl{ONnk 16ޕ3rE s6o%L{|ab7g^u娞5Rj2^}xOcydg>Cݻc;(*?Ñw׀W8 k_N0_԰Fhysэ-ʩ5OeRTJ6PZKqge3{PN]IUBi!Z3ӗTh .Z:[,*!I`\cfY _ZW+ x?11DWJ;jؕN% h2*i>ߣ%wL[vp P[ 6Szs8zL0ሯ.k^[|J~Tie (By)gTN4.8JX9?xJ=r̠<^be(]g"n0e{*~DR xGE+ |œ>;S*3n7w߮t'/ceR!PPO݌U)@74KN ~M?n/,&>_.jP$g1q@\P QPT~G^jV~Ep8i]0KAƢy93sE |Qʽ5Ix!^wǚdcD(r\,?,C1ײ6*FaNUĸwQI$ _Ɇ:V[p4 'X|JKXC6GMM#=\s\QNECȒ!p"zZe;^UonHqj' = lfۧrf<ذekuzI'~CfbS\p>7JhP\]K5KH]Z$nL:cMDJ.44U'ĜA;5qtO"  &] 8ɝз0+Z1QZG柴7+~;xq+X?[i2F[)~P^ =҂߫=s'c2Pe{r$c눣0,z;J"jA8;Tʄ8/ّ0'(J~&[-J'{WWm_/\a9Z~xiVJtwE(\~_Q,}is>%ܧ0tME !e* 
OĪvYl`~Pl&r"&[>&/PjdLX_62NtH1z*p@!g$,~%O [ L_O'& OC;?r7?Z;-@EFSoI-)(j7Y␄0?\ Gr쭹|OW,)۵yz`,у?v{^W&_nK!FO:im5;ϡ, _d[Yczp P5~8<9.υ(1+A^zUݷKKQN5L~?b9$.VgZN-sb_fza,ɭTo#G~˅=]]V4un+[yftv#|#}yȂwv_0(h Ι z6Ù(GDAIȅ'c'(*x)m:Җ U}*rE@ӟHED֭,i ;L,w1l 3$G|Bk+gH5[_C`emF#:G{]jnX ]cC|B 17mЇk׋'+ucQx[IO$ }ˏjw)?7tttD\B钹3a;Y?/R퓭);/eg1$׷6+h+vXkS8ۧm~S֪l?DjwwqLZdRxS4R'ޕv8e0ڭ }mG"7Br룱{H'"nZ5L|X Cטz9 }z9H奃-%xxnž&¾iG#Mɿz`F>DX9w,+\O=eӦOk3e6k/4!fAj<7z^;k)mknk@J;% j06=d/h,%#faE*s!i!sH~m|2pmOZ7 FOn͚uF!f l2@3k CɁE!tmYuV|odMQoǶI',0m$?-.ttj &zX ʯd\GJyݺE'ÉGjA!pvnLtȇTCӈed=M뼢1-ր >y8m"7h$nl'Jz2)t. 'j"rRH6cp\]Hguy>ެL}}!5@"O}6@ WWfb - L<_!脺#i&KQ#H7PBد@ Yae32>b5B0:ƊM.x@FuM.\!LT?{sx]oha u$r21HXr1-H=5P<°e͞ޞݎqe<#0$zBqua.9VUoKwk+o.5t )ŕ:TъlyH{jRT ԕ% 4ՇKabV*pe(7$N3&^0quųS}aޢQ\d_ E$6-6::z-iZj7e@rUaA <._2B _DaS߻ p2;#_S4 J9Al=%gy[gJ+QW:4/iܤyDH>_q'#ah&.I >fXҠ4|0m_jygHx-lB]65> 6zSꮖ\wWc3UjZ*~u7=^xJJɃ{fojuf+feA8װYOI+NKiOuʬB9+!4@N1rE kDpIpǿj.JNkjRS φ{{d+v!lN\ng+'ۜB.BMh?&3T#6m@N׌(bY=9{l ǵ_iłT>X'fWr]H&=GNr oGK@}Ev O WV -W31*ajDMQ8g)P[П&3JSS|g.]R){$D_h7[>ۼc,NgcP2D<&q@',B[p9!8Cegٽ[bb:|f _ |3] |mx+7)V'R0+wN] w?' [3kg;[{GgsGB5\xb3ļP.mYY!1cDJq5_ϐnb(TS'sÙuMN]I;cD 7Pۅms2\ `8Z )SDBgqIR@>"Ŋ5$  vc??IOf52Z8]^HYBr"63{]4BF۾8 &3v1O??-760 e,.X7S gOzq{TW{+od߅{uE;l*nW0I,'׫J$b-M3 @G5*pxe:MK|(Qvmn/?l[*\Ȗ<]HsE&8eo8A WwdDFۀ23 (yJMHB77`ޱL۱oHu{J!n2"+Dby)de~R#{`|$Ȃ)rɠA( b:ğ]U@hCBtcwյ<<(K(-# ńS Vs: U7kEݩ(0~LW'/C.^_O{q~}VŐL{1!)K "r밣#tܣc/*lݑ6+{lEy%s|`wS #T8ynv>vl;o|[--L8,"I՗ 1҇ghfU+&4{v(<3u~ڏʘ/'H$8؛; ց[i=skwF Y֪/Lr?C/.Wo1+FN&g)R-)Ec`V;Y?&b+{VoU|<4I/ʧ4u+ҕ:iU4v0$` <^>{<󪕴&$d"4`5Yl8P =g(jJ2QRN$+ "aw{k4( p5W0wɞq%O nh opOf-9*ܭvViYzuk^jj֠'ЯQSNh#G 1n\'3=(ut661qv0qo?a_tG|Os]*|®%$i&,5[OHGЭ'w]&3{zhM?-`4y=#[.fOǑ=w A I{tlm zg}\K!vi~NeW. 
rCFy,~ISdCJ2/!Ub}1-#WQQ?ygօϟDON`Ɩ1+3upQЎM^Y[D5F~o[9cM(cV64RP`:v~QDbKy|Y]~Y^2-^>"BL&$&X9Zq))R3?:Q[6 f)q(JXwk<#Lv("3yW=%I P{(<4&GEL&kX'jHTmHdeҽ0!-5xcy e8(&׽O:oJfޞDI^uXnʗ2qYiOv47>Mpra &]r4P} iS $e\5WAk8q{ô|#wΒ:gYe"@b#p*F]$1A偬C4]}XӴ@BɻNMJ=V!p /!duNB!l+ 4^crKf+6Ps䤼H,զWzgxZ3)#ʸk[ybIf6py(KW8\~q-Q4gnh3vOм;f:2}8{v֤9T0Y j'IOOo 3}]/6M'x/=O _X`Q"| ԥ4UZ4AR2Gm)Q`7ijR.4底iwA#.?`*v[ >[qigY$o?#\؟û}}w@ZԬt6b$I|5 $$)dCGaa )cFS/8Ir^ &Gwt.jo9\\..rf~޳FDl cEZEr6@is6H%bE|#rx?'D;ivoqdw!32[Rv:1-W':n8d=GcoA"*ܛ'M-\J.XRWզc9s(mvx7 ,<ߜf:V2P-\Iycy,e2=+Io,[x]a!n4^y*5 0VJhqP Ǭ.l1۷^|} FÝ2D2G^[֘58LcsHlIv ]?#5FayrAmuqY e u/V/]LT#zti}i 9;.E|fA1`Hc∪BCS+'YT`b@g -Cbt4U,K[4o o5u:lڶG1. SeJGaeQ +ѥԨeN#Y DhND Ef@@H@kvݺMGȢ^8_ L ^1'AE u;Y_! Q;379{"L?"NS_id4k%R뱃;i$HNd*ܗo<XOټ9pK3+Ap8Stٴ7lan~%*/,,Xpߠ~-QqG9DjClXb_.DT *@;?? "q JU;74ugB㡔w\{VBC̅SA!ÿD?йdFN(;J 6whEh'p^lǒG]'"kb&ݠ=fKHc484Y6T[D3-E:2Ya8e, u#Xa߄ήFa!X.g!JFGG߉Byx(jm0U1-Nr$ExB_j£2} L4&wŒV"Y!mo7å=۸֛])K]#kEtlW\ᏃHARXG}g˵S5#n;SzTOJ틉*8X yċyiSkā܎ ;ztΈ-յ% 2C>"Y7J2/e>p_FJk(./.dL9xvݠW`{kD~V1?ˬDt:uOP?ӿ:?ޟr}3T%oM2˪khZCOZ}׀ ' >*8?==?N.ˊAOn`8 #1p_d7ljB[?f&BOv^"oZ$kcAu+p`ZJ"I'`FSʔDBz4j"`AxGL9'H֮[~qgfg{;Q'!BjiI(l=QG)gaWB6"JjdOD6`ooϝ l[@ _e- jb* 1H› 204axc>16G$|Ŭgޱvu:7wqIZBpcb?%P\)4_#ϴܣGI_Lue>&-8yc'Pbd/|H҄=,m~qg$Se;V uRxpff K﨑%UMH?c)kv)>OAcݘ֙ծ\s0'Wt̖G1CLIvrڃkKُ뢮D>'79lAhѰ*7}Gnɕ7H?JⰧA=bNeC7C`".^NN͍wvb>[ iq@ޗ .XXھS˺(LJe5w&%ыJ..ўaY[e~.x`Wz/OqIJyH\}vSm8y4Yc,{@2.p+,eL1y~8"^/\\X#ng ‰[Qo!3 ;&lvv B;C CKӫL7lMT7>FA0s2zjn88@:#qShrG!ͥ%<j-2q삙iXANxZ3A*bԁhl2–B)9F*H.bkEph!kNNEQ@S-j@XN}xLX3; -3{ꭀLw4C#};5 tA(] TW3p3.޽FJ =8 DAOހ9 /']'k[_DoX&4񑴵^CK`3r8biAZ?yI|bxD3lhZmf2`8k7(9&E"HP-KuSnp'A#BC .f9Hc\wo8Zl>Vʏ$ 2I}VB0++o|JFE<ғqc~W-h%hܱ +ue }R<ԡ9z='+'nE:ŵGՇ?~fX2yixO8dxD@v=;"/LITզ6J"#%c,];iz&~ԠdeEb^w. Qw|s, [VЍ"j_Zũvj,: e nP=5̵J91 #(w˸;Tf6珦9¡ H 9#TjF'H~BW}B2?0gwSV LX)bgn,P"C=O *e,iaԣ5f}ѡ*|O {O[w;xe2+UM,o^80u3ȳ~iVHɓ`%F^zarYoC `0-1.鰘\LZKKO!I{"K}[OƜ)x\u^|O|aHef oJJnxkvv2jݤr[AI%3i8OMHj1Q{U{jL\+Mg")wOAT$%軪'ټn7e-L"ځ!w凜kvQl6b\Fe֮馛z.c1b?NG'mw~N+g蘮뭷?0@~]߀&:Ȍ]Yίgv'~]v!hY`Cjo[zӁ)? 
Y_[z:UEyYK?YtS@% 1F-wgnd?owr8w8FӁ|1*Ecꬥ1U70 9$Ud&#DI7 &Xc̣3UX|)œJ&Tcs 3xCXr>ɗXEp=U0Ii:44i<9k|x}MTbb*#9V}$ ÝkmW6 Q hSl J3^6tZA9 59cZܢV񊴌Vf1zdM>W)(hDW*³?”3bS4y5Xvpegx5)lW$]בJ2܄<I;_pӱFV?_22iԵ:s:ϘTY9FjKf5(jS"ݢ[k# ًͮ -y mhC-+ "odr~-tFT3$g3ܩ|euj a <^qfoᵽr60}rN͛06}Fe.|{s.Rrwˮ~y'm~$3k=D؁wj̦jmBǣ=ҥ R;$rWʃTGnu']K\/yF`MǬ{7??MPT&0(-?&=:z/Q!y*: _?{GsO+#жV]hN1[ZOBf:{/cXi.]gJ2oJO nu7h<[gM/ Tj=֝w#EZ~P)al_hCps/u pwb;==sEJLzc30mx*nzJJ:㖙#o2? 9~r`ץIZ)Gah}Sb2 x[O3 wbal3XyHկ%4]6ML D52 )=ܢpd{żtUnw[E˞s О SdЩ[{A )wzMVqfVQy Glm4Jh\] F袗Xx/PxFҧW3"^qxf ܬ#6cAajЕ'5ۤۓ6R@zm?Na@y"߹U˵mM m:MebCEG l)<w w.)  >'[ ӂOE L{Ix-ٲ\-YA{(p*z2n j*Se>HFB<|勭AHVjR_Jڂ.mE L`P,%  ľp$h0f~ $J{o]8GXd+9.A+ \QNoynD( h u?\01ȼOx#E*FhvJ&VjEaK (Q{"3Pe m?ymYZ޻mX:>ku$M;T'KL\|3gskoQb1Os[sfhĦn&MSk Yf ]NGH%(,m-&IRZa~5kLTfr|  @ m\5Y)Z!4&_(%WD*W_2i3<EMߞ1 g߂UӪ/?EY9*_;xRt@=?`MA+L#{`g>_?WEW1s{ӹj8sq"}3.G 1fkQ۲Z>.抷RJb|O-,[iGݖgpUb>r[/Z&Xlۊ-2Er|P?v_cd _'g)* ]s_Q'N/@]$KnnmWT z<$%3#n?J#ǕDiuB~|?ulXQQ4VP2)Vw+8>0oG*i#Om𠖋4BgcWZyD|$ VI'W`P1HGpt NAsGiJkvY(QmץLMU\&i5s'iP݋o2B'Nme_'G"  N B -y dC  ΎA#(ďj 瘀#%[>>v:nvQx@:!(}} :]1He殐yp3PfyP#R.0 ׷v!eh_9,\m;>tant3??\9d (=ɾ(5*3,=4YKV<=>Xn(IW!c\q)bEkzXI7B#Lp!-a)#P3+!}ULOטsKJ; m_pDoN7?_61^l8peI&5̂}J0d :"[2-G;4:|a?ak 8W/IX4a4mE eKqixG F9@Gf^5k)Vc99t# iapF0:hZ2SdM**ҔITC˂J[HiJJF藙޳eC~{Y$@&*;'g kz== [ g==:{D,~_TE*:WSa\5la+ }Y5ryޟ-яJPյ}@jk\lB3 [J1zYqVkھ)cȸq{䚃t=ѧKv8Mf |RV^B#؎ta{1JʷN*qnߖn H6G&ۤV|G}l@ ?avɤ"ER1}*ќl/2B-ZїFvbeztFxfNHfqe6oe}qKB3ma3t=Rpt++jKfnR5qIg zՈQ M0F'S9To8!LARL=~v~sXnCU,5u-nE5žO5M 9)dcCC !W?eiPoo5Ȁ%$$N.i,D`SeB탳sk[3ц DN$&.4ThJw M AZ=,˵5 U@^S=6d1m.ZT'Ö!jn4)mXB@ R@٣}˸ pqpp? Q=b_5l4(- K (@1 ~w\#.3GYcb&\:?Z\/Ւʾ,2Y=%7ߚZE5*E0)C8)EzT4fͱ94?.\ %vG3b|?k[#Ѝ|$D #&UV )^5Gnٖ5Hn,l,w=wC,18 6Dcճ[!ao$:U(La\ ^nB6Aœ-/ NJm&oApbAQ>Xr8A3oq쵁H>'?#S) i 4FDR7_Mks8=;K|*17َ|s^mrb"(7c_ iU"x)}m<%"6}Vl:C-JZOm=E Ѝ ϪXe0=J?LHz͈ Ue`,>-[6OIR/}!{L.#Դ"1\"BA#w`u\7ed!{W:2a9F(YUh^m}LuQN > t`i\@/@?  `c#O@;g[U_s30N.e^D#-/ $u?@dtg&ehZ@`~{! 
I )g"}`u0|1H.ph!!N*T =X*uߝD2m] { j55\hR(&}3V6}~9|9;%FY_ H% 0E[p59R,V)ML1s[ӔG2XG{ b?Fj`k*U~[ t } ZD ¬Vw!TXM^ 6X̡'$~҇k!,SԆck%/#I.&s80{*! 8q!JUh 'ZMc~Mt؅=|dZ ;U@9!Z(`3$!8h!JeGdkQsE=zzЎ>2"Ԫ8Ojmxз~?)qTi׳#Bg(\\6vGLGd=^3?2ʝ ֧yҢ;H$<߾eѡv6:>_sc9k*&9Aﳂ9>wsQ4 Лls ~6TFԶn6v!#UTQ)*_ml'J0A価vjN j0-Xf/le~ʿ8BKt0:փDEi&Aw+N69 כ{WnچQ|E2FH ac0 5Q8u23WۯwаV_qUиEqmzW@[b忶AVnʁmq8G! 3^QxCߑ>PBi@ 'ҽ+'6H`m"Ҝbj0/j I OT b>n=ٞVKGd˻ ǻE< _#q]k]}mbG{=u\_d(`mFPYآ17?@J_(6H8 E=W5EV"kQk^e.3gȧw+# Q1D2 %5} C}FL<+#b2xurͿf(,XiƬr3ɌSL<=d0{>K  \ՐܨղdHע;;Dy&%6# -X-D!a1D]%x3{.(_ƣ3ED}z ~MwNr:n/T5&vNgȖE(mۻ oX5?~ kX MXW@woui(qQe H~ޱ1^Q+F b0GSh!+4@4I_YtLK;S(BZ^uv- I۵z̢[# }rvB#\+xJӑ4{% JkUIGc5 47Ia)QȖxW 7~PӔK(7mF0*RQь)!GrGUUZX?{ FHTAh}=CRC }I8fK=v-DIV>\6!UNL\wWNƍ^N㛙^άNU祡+WWo` kH8d< 1T7pR "F -$1TxG@9NTb>l}PtmI$9ЮMdUL-41[Ĩ8T76m}6= ^mpMn5]d ;yb 5CBa}.+GDM@&<5YC_+' yN )QF=Sf3W WU 6H`Z, v~hF`w8 B@ڸ &8eZ`v(LY,C5cvaw$mUY8e}o ;#N>h(jAgPS':Z^sȣ/N;+>@ڛ'Svjfvn @y5m A' jI${5tvr.Q/&M UQ?x2A!KrƃEpGT^}E?t1{4g8"G)(K@N>Fh)[V:SQHkHEGrg%(쩤R7õj!@wBWf)vW=SEc~>Sn#/=0[z5#XȐLEo31%Q())y? 1?L- Ԡ( Р$Sh:aG1|H e]Bu'rOs'.6j+a^h vu"m:@Tj8U>pҪ.us fMk Jl9M:N9mh5( wm#hO)*̣@3-JvqT3MBvU gFapNz^NAbedb눃j\H+xzjMU9' >YO9]BހG11#^ٝMxMHk͕1*"҄nt|!?L1/DCX 0ymŽᐴ!korJgQO|izS:"9ƄqLJh^E3k!! 
(@jG*psa o,DF{ Z\__YigrËy&%V:tS8 " _D[D BŔP3b.e:aYh(áP<)XKm۶m۶m۶ڶm۶oMUR[d<2ӓ{wV!яR3&-Vp`J`h5ЉZ*LI}HW [/:u*Z $;Pr 6:JZ#hVt xfz bvJ:^R In<<"[7hy8ɡK0>b6zn9Ua6"P:oFghe^Qi[>܃f@Ck5L#Q5)a5V'}k |4 SkjX5ԩlp.t]WV23lJ+!O8s rAHQ!.!ݒC!y@0^CA'< [t*dɴQFT&)>z~9=g` 5Bi)rz/)\~С"GYj Ǟ`EhIf* @#^4I QTYFY H D+*3UG !<,*T7EZd f2JNyYdTtǀ#/n L>EY!.13DZ$\aT9kK6;MLH 劗P>B{7 Yn&b VSN9Vw&{ xFNiF׸O,#tؔG|fvl+02kٺ ",-!D~0Aa )˾w$l;8=]8$0,ۏ^&W$y:Byh ӭTYvRzW*7Za{}as +)C ]yQ33`K!5QKCtK fd0'Eݴeb Gk-}D!#Xd򺓃'~96_9ՁI%&uSMAN>7,Y *Ǹ+( ͈`'y Pd+P[EyMUi^,gX4rIj͎ ެPO&L~8Wܳn0ݜӹ U3d7$%rO+-7W^ 5?W?Gmpx~@y=)qLÞFSM_[n$g>=fzg&{d`-}; &(N楺iP [=i[V]ophʖ,'%_6f~zZ wreՙ`nMyp2#e%%`RPqqyrEo߼fQ B4zS'~ w${ʳ@旁{iL_",obm6yU0"Ȥɤd,Nl{=pNe0%,sA#t$O@MVkrSh$Y(,nf& 4* C6-Kh5ML`;8l5-2 ]D |[ gnxf7PZ)_^ vRbH +ELT -#yl G1[$8jry YC圀>58Pt %BNi3y?uMvmP@ JjNK(@=(Hc{EOr%[&7RF\2oibeX[=Yڎy= H28YIwZf5RwHyܶSxƼSkV+G'^pz !ϝB.h7e)ӂ\Y\#نby %5]ASE9z]Hj6btZYqOl6\{\nZgs=/8*.wuTںqI.Iج2%8;N-a=!0Fg7ϚeW jVlߪ`4s.g6*Tz}̐1oGe=q(S&)x]}ĉSw$}gWXsr##-`m!A2Bwdcv˥{HB+Âi) /VObkVaNZNGG6Yy{(;|/mtyv-g_=wzxvvW2z9mE}^gq6(T>z:/ڬ.TR~@M`[](@v2Jǘ`V% #bOq ͌3v_Ͱa7T%)-YvYt>ˮ`jiJί }ߠ]fhKܠw濉K%lTb`yf1)8L?^;8=r'36T^3=ȸ奕X12Wr&ľ:5?>"f#dE uZjc 肭i a2j)]Q*(ވ}z=K4v4{f5+޷3:yoeX~!޹Oh%`T|wogOvv8k D.<ymNldj{ݣmh z'sV 4@|z >)X=v߸3#7hN76cCa??' ro&A񉶬󶘗LHReO+!gWQBlijiUz:X`fANvh awIN_It\IRNtB3BJh /mçz!_~?b$joq^VW?!/^˻ה|W?1{_arw59(<'#^!٨І~zrFγW΃}O%3ѷ^JrvGp?bz@SCUhFa"JB4F-ajB._؀4ŹS$F9 cO=LG+MDSϴtF:mLOfW3)dohAp{~m m@mwQz_)ff"ef"^N"^B;ݓCmMąlf"PV O^^s>0QQDtH{s=RkmQCVМ"}q+l6@@9Rڂ1eX轵a~^=#H#UdZx58ĿyuӮ,Ӥ=0Lj 2)K8ju wyM$:pqČzއE@MT#k9! :ԣn!5fn0q~*n`Ñhчnx2ԦjOtX9ȅ3AZ.CzD<\ٙ3vôCLpќR;m%lO] #* Tմm(1S uX.Y!X`Ӆ",bm:(n m%.ж+׶T@jOuhI-͒Rix7uQLX&uu q%*#0n*I!2,fM_t[47x/zHx $Gb½e)hۋM%Sy[-7<[C[v2:Y].goo[uDĮ@GZl}ChbƦו 9J'&RV W€ S$ZocՉҋQm Dj3[ ȏiθܷ{BL>-.zV+!Ⱦpt1Ǖ3tArn*W6W8d(@cv@1 N$j_S <9~7n\(WC0ٯ; 3x9$ڀRQzڑ$ -b(ZVBg1TC(qO9zZь #'$tXπՌgpPi/^aSk(E鹘 TENGG`ߵgQUtuiY.VaLe gL>SN ? s 5 nwE`?ߛ9D|~ϔ4a?yI'/iA%OoYekVf:1. 
sRlWO.1V/ 5k;Gd9܎\;qFmel[= &M.V6eV*PEP+GfZtv5n:6VYngn2䑤hv">sj sQ [k*^xPWRc1߶=d.KC}X0`: ۂKPIMmubG5XaK%|x<3OÍDO:@j4$l5TTSJS[qqwo'@b!gВ@qq34V)m.+x,[4q0?=i_6X\uBj+)bB)LeiΒ }N|x1D8!fcӎ^FN$ҺAM 5(gt _IE=Ŷ҄hyC4('T ?=W͚?IQ$Z*>?Tm$З$^GI,/K]%0/v92{ÇlfŒp^*ɽ&]"]\tmDLLķX";1>sm$%W^ kLQ f\bC'#z)R mey`oG 􎭩$zmyvXcĉE8vGq:LX"lXx/~[j]Qs3! i  Q)+O6x eCd[~=n +Ml7,k1d(T`p8w- ko>ԧ6-ZN23].|64r?<BpGM2l{N{N~~Z].SmxTHB<ݚK:ͻe؇t;:ۻ> ǂp9U>n_C%udJ&ˌqZ{FK5^$C+#P48PYl4K]@&flk# رiblRq4]N`P?`N.5D:LDUe]E6p0ة%{k;[d7;8o4g[<,S>eLO'l6qA NOɸK.pL?~,TF(OzP6#~ta?erh 9#\0=18ejLpAS)k_; A9;N[vk\Sλ`9zC"6,#XڤGK~qWu3IֶVmi?6GU S ZC)"aD-)ٶ :nMCwVIQp,}Űvwe %VK+cKPõ546^*ۼ bNކQ"6 r/B"16 9 ( ?uz:si3٨ >ll(Z1|(sޑd7(|Y!CFMtj%{v eO}J\(ɞ1qL3L6Y TTE@G=zֈ.lZxeE]8| $ :==i!ķtJa*bP(>v}_Di -)aE7zf]utSIcPC-fI| kPospڤZ;hϬs1[@%np;'l)V]Q(}||N Ea귄vr!Tp8욠Og(7 /~ u]/,@׿rMI` %^L #4]]ܿP4o!9UG߫$xe2tAsyhG3~܁2+ˬ=Ҩ\@׈R=j"AEt& Bm8'HB2\'~@^`t`w\5pEi*3uN ߂}*:bwy*U-E؟sXĻD6Ս|b5,8?T%UP!쫢)Fs-1۷2`O[\a Ĵʠm8I6R]y0z;&xJ.^E"2b$U3t/AW6F gtȭLzͺ#Oqoc[=BjmTDE@mENerz\۩X"D=6&9)[SHR*T,YZo5|-(OQ WrJFuyGPB++?܁Z lѲ/QtCGNK4k 6I)50Ñc^ 嗷Pӿ+ yȢ@aAΥ<9]rHCx~+ҝs?kHoN+Եmw 5EDPSU/ҵ7jU52pLN~;HC,dfռ_t)x$vfg^:UbpIӜ%` /t8N K̰ {Rè]7Uٺ!>$ȉ3 ضCK !aa5c<|n:^HsM2U%}~369-bkvfJ,: mX ?<5Gܹ1:2 /&⸙7{(^CW=RC9b|[IđFl5ݶs= >\4N={;ɻW;fu p~a񉼳1ʛ2W->x}|azT)cjx2&^IaN4"Jgf>$CT<* ]u]V{1JTk)}5́Ӗ35K3ٖli< I.Tgp@Z[IYg7yCiRSv`o^/ ר&=qynz-f.6Y lօ||Z x9Y+W 73^\ bS ٰ؈t|gPb@Zyu>I;+D&h( VV9ȴ'O@Y@$wcet w{tFESwmK׭!mE-d[u/*' .GI2@$Gn_%k: h!|.L^ M AZ_%V=1k\.ҹ7wPy%ޣTZݹsPX6,KB&}.K^u&' s}2i8q'J w.nKSE㊵S@| 8\ :/zʻ;^;/pWA0;D: z;^ЉA7_nf_,/O~o =cV,Ԟwt/{(_:OV~])Ó(خ{gÕyJM5@aOVJJ; 1+s@B z֞ϨOg1&1He9e0GKI|t8aqkdRdlh3;DcЅ ۘ:P/ﭾ2e=aw6DKxRj៯x%|".6]E(΄O廋پ5{0'csSeƁ}9KB j,{.wT2Gw V`=>!%tF)eP9D{?(SyN$HGB7'[L Ys s\LD׍f qeM ֯=eI{~.}0QfR VoaX%ʹTYR]rP8=B{Fȏ"@~g1a^^oO{{z=w;\AgAh>z1[LfF,5Y#Q{qzMC"s-4$鑳n9zO ֯NV&uәv'lѬs g%ѠM ܬT|ԱPӑJ (t'dǃDL蓺%Z.Ɂ (ș=(0HQEJU`Xb-8zO6 LMiOr@eU5v)Y?v_]Ђ9O&ڥ*un+蹃rj!l-1˜XᨍgbfEvĒرh+Qv r~*]]俼+ 0([@ԛ Jk2 e3K 8 t8_񶹉.Zԛ֔r)i<*T0ՐU}ZFnc'f KZح/j{-aՕؔ&jmj[j#5s3KH1] inrnfv ԳUHce5mC*Ex[6<\w 䭒u}9҆6md4ʭɻH 
,##Z~b0\;vfwv~~hcok2YyANN_ڬW9_ #䓢9;_)[#Rȩ" =dѻ-hs%ܰjzjoFjg3P?bIo%m6^Ib3:3,7n $N,e *5RZ۳{u%@.IL 8DG $61).Ru?xX9{N dL3xGa%PID/s&?09!Vf 0` /HZ)F*L'-Q *x`d/*G*~_tْ5}4OyǬTߍXnKKdqLzuwƏawnu}lE&3%ްu㚞ɘXJ+Gq=ЙS.22m=FcQًDOd 7zK6nD}̰W׾,$prw.|%?gB 1gdmu-$lyH,SojLKyU4K )~/ f;  %\.[L'Rmx ^)p8h THx1@]xt#6\IH VP)"\B\ ZZ_CV\|0~+`8z/$OFB]2br9 Z LCH"^u1#"wA feҟQKZ}q01\'+ʺ4?#9cd(~rtxm/M~|9KiOfu% oÂwjN}Hئ&oaAeibbf)F#iwge_R蹖- #(" "( TWFr:+>E (Ά=qf1ӮGaCclNMͯ+GM*khC`*+m\ͻQҖCZ-&&xUrPKIx%),;F vC9,E{ZS!P ;Jcd$QnA'3 7M-uɿljOm;㜥w|:x_`7ݪ=$(Twﶜ?2 =W ΤƜ9 1ߪw㹜 a=^ߌ;;PS?@;S9V/WkEݡS ݭRM̖yӔ[m { μ瓇yV Tˢ_QRY [[b&I/WȀ!Lhl` B /WxU1 DH6ʞpO qJu$#AV 9>{#ħ_mH`3^7Zճ!C4'?WJ`-l*Ƌ4JZW"ѼxOҦKvJ!hGk%]=IБp 9Kɣ3w~GO朶;0bɡ!k+KH܏^5sF Q  )n4)Up6*BON9bْbfdz?m $JاKL b"/d<* S@44J6J7c+2W4HԬ(QpRrl;ˍ&t2m3\g m|C!Q  .O!L_H}TLx(L\ v[@*[kqmP#+͇NX :YaN ]rn >=5dzURǘhzV0f[oQXk۱>fc#)o-Жq7bnҍKИ&VU=JޔH x7_38Z;eׁ+DqTYXNrFBH|KWN#e rfe4WwYP,^s-FѨԀnWUM~O eܵ1GmiW|Hۻ@_v]#"K-UB!ؕaʠÎқy mn;Y6?su͟ 6lDE4߮|+kgPA7\o2QvAM5!uʑ6e Mg.c1 d.ىD/Ը҈l*+P?pG 8I54,km^ظN_IXsz6 yzwDc\5 βh 1i0`2l3cr]_]Tgل+ 3S,sS2,bx$ Ei"^39iG&Uu ;V7]u0XRI&i0H A2pL AEI ‚aQP7oR7-)5'@ Fۘ7gUkB 9q˸ ,v XUa9/"#^3)Q683\o>o%c̏R#5GizMjGpt)F GzzAwxͻYEB%r B+|fK=d/ OŪ1xZ˗̌ #B-Bzhb5r@1! -.$Ih]Y׵#3Di R2CM>ۚ&OV }IZN,zVJ?}E.h8H_HU(;75N&@zjHpQJe}lf7n\b ,X/Α~폇I34s%hII4O\:]L/qh>ϓ'z9Pgn;CJsZtEsbޤ(`EޏWNQkS`u( Sj>ALL_IjPBQzϓPΕWA!d  qr>'<7]Mƴ&6+_L|Sљߖ7)XpD|eTbT_=|@6ʴL'`us1!eH~k'Ft]CaHnQ/;μzSwܡqƽUtAUPJլxW!0AfaC2E8R ?~GuŌ9h”ҩ0LHD^e $22~ӡVK;:t r{blI|:tvj& (/Q!(Nl K8Gu ΋ UDHWe>oDjN^7Д&op#DgzCc7x"d7xs +)TrN3.3ˎ ؤ5(z-[i䤵a"S ZJI,W/"d&l­%G{`CLL, +3Emn M]E][. ##$a\0D3m]]R!ΔĪs>/x1Ǻ, n7$Df2)+$Y5]qL R'FW&х",hwn~V)>hwGN.4 ꢱSFi,L>?R|.K4 Jm9 v_ni \t_Z{+ĐAqRGRDjXλU\uN"4q5Q'K/ * `7,3ӎ,Z| S_7uZ-d3SM&O]rÉPnj5 zAnk~UdžWo܂ydhH]㮕L#N^]Ɓtr[P#[[4W.Z {f=}9 $. 
^ੁ+#|  ك{^!4z%HM;(řD.:Cە(=X#r1$sJ:mB`\\8gGL)'9*_ د[u/tMJ-^6!8q^L~G<E7Ť!HL*4<4:Nj$d>=}Ro2!k;)C~[$kʸ+l'} r ml˿18 gxkWX / s?(Ũ<?sN>H(Hc +?$~K d+rl%jh=苁ضY4V<-Y>1EM/Ht[Mx7?'wc_dgjvG]9['p3j@5V̹~-/nk@5/0YA#nYM Lhm@qL:ȿ 1$n`{qpa@~@=aІU--k o^qsZ O?q\΁4A9 qF3̆Od(@vpDÄW8AnI>MKb˫om8Րzí=DXlBes*^)sgл1_}SpNO4*dH IBT0[i!^qƌ1elܥ91^F 3zVHAEq=z#&}TK,5f6hQ*Vl?oeLYs#EC&TPE J\%JDPĺQ"qCY2Na HZL!4T;Yu!BuExf:%+%v-Ex̜ܶĹ?H@%9B_P)qr c gIx>r07Ǻ6>'' bU0IƙоM[E&<gޒ74}8zH^ 9F۽_6JT;.8bA,SF]e~!-my:;eG,%,P6>8Ht]A넂a](όMWPY6L=^"(te 0t3୙9F|^?oʤ;iOPSr:xXQ@# ßCxremRLs"FЕCxоd;d(&V'a#9~⻥tqdn{ -;^93^ oHwD+IC.׉M DqE(43>|X4+4<:mJ\ܑjjMxZM'<ӱC +o\b&7Ʒקzn6,HI5hz?܏Rg r&8 ʳOffXcu(Ffvy'v}T(jZԇ6{P: ]m\7[ P*z"ҍc '7]7G<(6$J';~` $K^Vk 0f`ٸSg6W,1FEP~0:2>gc4K_@# hh걯냞f)4" oǟS)#NxWMc@T"ˁp)*8) c5Ĺӄǵ}@sǎʨ:Ih^A]\sHny)f}X|`x $ JH[ 5γru׻OOFΰ_VV>f6޳tu]'f&} %$tzY!H`vq[D N$"ivQɵ0~kf(2zWaI:(6% O8+_8b+˜?dDMq:t>l9ݕ~'91M-ܪ:9\ġA$Kn|OyrEMl s pB䊿 PLRH䘼VOaʘYUe}WP;z,)F;`RqDQ0a4w劽s$p}wC@s <_(?H}b諊%1s_A+q_zRNtW*G'1hr|]>=zۺasg_Gto\5KBwxc= Z4bҪ}/ ^D~kkNJ_ ΁HE;YC9K-wE$$kpbYs`7qBX`(<0V4Ey:?A>_b 7z83r?-C&[yߺЯR|sznm ~8kШh [-س0N5sU:lcacf7uF2ϟ*iUsfEd= 0Nj 4):n^k[,ycD"n#TH2jXFEdj:t8)ZSEߴS8;)`Er1BZ/zYoi_ܤ|fx96oJ'l sʠ6:eH % 3gh=a  Lw~ e`6Mar9`p@K1"MҊҩbGH_ˇ`Ҍ=ĥoW8b Ls M@ O @K.q;@@  LhLѓBek˯&#AE`J~wqN>l.(OO/0|xČ|)'rAAІ&䜯~}eɺ>a:.v~t ~z&J ug֐̝[Qeo[$l@,6j-6xZ s-䲽zBQ =I۾s q@ʸN.rydi|(*)5n: aF8XW}ڴVijl b80$N#0xg,Om[$a2.R"wAODΓyxo܎+Bn·e0ΓBdoJn+<Lee[h+qgYulDA/`n0 Nut}yӤdԎR#z ad$.hl鮣m埬.䝪,mz7udD+x q +NM0g9h3$'Kгg*@O(R J6h) .,odc et㼍2v3@Ty7RH#SG/G$BqT"YO uMOfl=hިD֍MuFhr ?˙t p b*s!g[{bhFvB7td{=Kڳp>wf)]# Vn#iSц㐺tƝ׺]Қd0 JטQNM]VҁynA/ <$<=QEy*-艵1Q=wfUQ嶥J&0w*&f@f]u{ &F2kdKo\C7?`9P1}ae7zrcO[( +7/0M.AXZ2V:2IuH׹b?X@ޓeB/Xds9XfTi"-Aەz䠢j.Ӈculk E6?Pe0k/SB;M\k0&Qfڤq?(ˇ&2Rb]B;.Jb9,!z !*{<"uG!ѰàЋu1eP{p)]cY&s9MI2*2hf˵GkCR,݊*4 Na4t cn5|xM8´VwpC Ձ*ӣ%w3[!;t!N`T*%.ChOcËk(Dԭ(E=!!q ߬S!_ʆxPR!xb^PLIAsLcU#c c W0kyWDRہFM4Oʀ F_@#M X*z#NFPc>D|Ɏ2BP-x!-Y줎"D0D eD)gܢ"󵂼J$囗Vou>?+ė SЩ8Yq<`#K!JÐq-`wEM<ޘ)TqSXZaJC0Yoűu}шJ|`9`X0 nSk`֢)d1_͵=n["Cna['A֤c9Ze[uiDb/갚 {Ed3 ;$L>38qxO`*&t/y 
"FRډR,T^WRi>9?+"TL_!X;>/aIٚ4MNtչ/HX-cpړՕشZqn(B[VGd"d4*-mKCźU5!Cx7 ;괛ϺLs.[IkIf"%n]cD51Hx d e) hRӢ|=z[!z3W] Ts 1~o[ոFmLΘGTtU[%#L4^`t3‹" *nG/GP7y$nF X0SY7DЈG͛SRܚdeaŗG0YiD2׿!j@vJ/bQK . mHr(c>Z1NaPjw8;K!۟TCMP/E#҂ YPQ?Vkփy*=VҹIO (DS,i0{!IWδ NeoWU\пnrDyzM|v^V[jT%=sTF>DTs˽bxcDj6A8F(WLQЙkj\9r oiSU8RZREEX P_> `nH vݴse韖D{X2ss 8l! t4p]1c9n= gX}ywM@z8 <@fsd\0,1U4h WviO YÄL3('b#HVm^dH6Q _ @v_X Rh탴҃)D`.겜n IR%)Dͧ/`* uMP'ѯƠK"F߷)?SUuM\<7($%:ë ͱLZLJ\\_$Te# + aAETDdpk(Sg0Aն½}0h$+gH?G]l*.DUq(LU*dG Ix|2$j%@:Sv!QqX=чV7l 1[>%l;4Po!UG^ PZ;mv "x,dr;S}y;@$G4(c1Ø+U久dwQD~ɗӘ!!y mRaLn_I?cbFFֆ ;[-M7kN+˨W}n}t#ja's ,f9hI\F%J? L5X1$,)< pSO"1ȟ`B3ڹMROJJ߯&S9NݥPEA Ǜ20NTtgۈUݰ4默n.aʅXO]g晬|"q$XAIҙY8%lnfQxokͯ E6aM ~6(LQ$:\)1BkU qc$LҺGuF j)^Ǒv/ᯞ>k7BR@ 6ڱe{]L{8Q֍h\7 pI 6Ml]鲥II;DN#<؈}4?‚x=jCR7P1FS NI|){w-ムԫK5wh`y됱bCcz1p05-{t1y=tb[JvEf0ʏ"Gct0cö+Ẅp ehڄYgܳ/36P|Z לk n'7$3,hb0V3AH,Eu\w|Gw/vIǠ yK73}-ӞF^;W ~C7#q`ő f> iq"3%WDXU_D}fjH91Dq>D:~t `(yXjoH=%Ms۹hmWXJ)O8m6&zQVšޣFU n}Cڋ]f >[vs=(qx@AbGMaY#@uϷˢݤb?ݏ5p0Yefy Ydn\{Ya}KVCZټ2JWfT^.Dة,61p0]>WY[RJU>JQwXE`jH'ؽB|X*!zLWCȃ\Wp&2_10ߏMZ 78800NaLXj@5撴(KS4N4/o` Έ_3tȔQt ww0pDSz_)G4"QIH|*P[e7>[p6"cy$Y^(90E2.qz锻$I +0'JΜ6ž#?fVH*<iݽBM{.1% [р>d&sFj3hm9b|Bu<^hs ^1at있G([S>HRǪ]xc#<" ]4 i ua6ߦO.YFH#NBXsBHr,(|NOo}j7ž#lPOT Aʲ x]i`_Y=[^ߋɊ|e) zTyJ"yF!07IscPUW:p`mĀh07@f ypbGTA+*?fsrdbYN 4\:3~zf$dj֌ko1|UEJȎ<4,<6QfM^ aW@䧶u _O 5ь*=t]p*ĺء"i5^:(lLc7ˇY̩qRg(-O_kKny/[>O*zu/[}Y?Y:|o}3o)/Qp֖CéﲔڐZL=b4/N<@"a@:[51b epJy`*}^.0.8XK{(غA{淃a~᫜o_f~t?FTwo`noSFl'L!JPt5_(ZX5L1\Sz)ƾPӵz0uNRm}8=c8l&l[ƲA_P֌&ѤE0Nk&*_o21 &ԣGi0MM.Ycc vK6%ޛVx6T ͺ 6\r_"v7Ky|3oDoӋIK,( VA5ӯbPwSKsyvpsG@÷ %O֚iS8]T n:eU2pA A–KNbGJ3EW^X$ ^: \0M {9YN71Lix"qTYs%QDӄFp/еŨOp'ylٝ t_Nĉ`I')ݫ朡~;%d*ՠs}3_l(. 
4)}W O-j0(rPc f&8/YAE땘*RցuJ>("ipuO?aUjz8K8U i i 7e !M5j>gâ8 %J_([#" 2f"iC>HJ"aZSiX]\DzS5l4*woD?R]̆o}.̜JѥȚtw Q j0ثjH״h+Ut;q ۛI{L'd8 qlЦŦ t0zdL۹غKԷlD|`!̹Al9  V7S&,KW\26j!Q 5EB"jğ1<,@j[U ;~$+Lkk <*?PQ FQH$gŠl) UOir B&sޗś CH7׳#$ z)(`e͌V<С\ܼ[<%>tuȟ_z VӢe$ #^84*( b9-AR鷫C·5 ]#xy|>};}0 8;bˑ߀&ݛt`&SƑ$S%@\D!~9'[*:3"Pf`]"E0b=K`x 6hl0u±ꬹ_WAqN;q 6KP3^t\T~o˱ 6Ld7ӤO \q CGH{<` (nc 88r~G zGc8,)eqapn9HjҒ-:̂ҼB/xbd[[#-uҩ>{}F9rIܬ}s68(Gķ%n ZB?@) 숷/C4Z6~ר;v/Ȟ5רY&~IQF2)ϪOw*xfsŅ!U%/YZQF@Τb XWE^txgHB 栱ѐkըC3nn:"{ec+Ņ,I csoBטk[y7\ rJ,5FAX!cdMun]A~EudAsoؤRabRLRt hZTʗ&LF̫lҜDaY P n2l ۊg{vuhra^W+#ڜ:Z=״g7ޣړ j/'iӨX_q5AU8{d0֫ ;_r[5]۝@Լ.Oɽ~9@Z# R޺;=5sB|U7]6U=⥹?:Mc*M.ǴJtjZ^ f 7z<>ϗpWbou3vg8G>OnIlW0adnW?_]80W~1B9}5rCgDqW6{⾆RwcီPb~0jJ@.F-jzNGv ܮl^]E(m."("d.Fm#X܎$1r̒OSesCq(zA;J- Uf^k~ʟfMm}|_;zwy51 ës o2} CzAW 5H .`1kփ qer@`x*`؂m0Z CꁇdnBW ua Gۇv=0*'7bxNG%-zG`j4hOV(Vqu;sa8JR1qF"Ҹ:CaN8pP7MB>s}9U_ɏ$Gn}i;nG'ncF_NZ 26gKf ;7ޕOa K2/[j!;Zep |[&Eɚ/.(WCвe!Eqr߼KI|;>޾ .]ȝ )~ir6\^cc91h.V):5A1 iA DrYë˹H7ޛA}рӈ V,73H 29DzfcZ` &E4氎st+MpV(KJV ڴ` Xoxi{LaS-]owbpE{'KFdZ֎Сs Ha%1? 6;r9W!(DjRJ._h~Jƃu#`&|حԙ~amgQ+Aw֛}ƦLє]^- /鐻]z_{<_#A[=G?Nd$rF1Pz/C検^$x99@iQ$ِQF֧>)N&/a^i5[['3@FȪqɩ=CJfJՓ25g8io$bB kriꁫsf%oyƁ٠DtԲpA7n}hI?eeDamzջV53jY&4_f@sα9v0n]8+?7F=A3ѡր_'RJ1f+TBkuNKxK@RZ<* `4N n+d㎆{!7inF(wӡ3f)W: pzj@ $OFBBooZj |vb>.{`c $FD+XÚK7L]u"bgy[-Y"s&q{wb6?X%xgBJȍ 9r2y6հ.TFbPPaN#D l;~s_Bq4% X-QWAR9 V4Î&Igo+NQj];~;RN-:r[Ɂ=Z"ExiY"]! 
IG4<5wdzDl9[SeRT_-SF_Ab%F-y C{@FH?KṊ1r$qcF|bK0}R|?@ J=j0 73:hyd4&=ŧL_?F"~Ɂ#(HkZ6]n6a_I_g3*w4qr6`Y9r@ A*WD%fKH*%J$9 Nz~h>[f X]}R CL9X#O4<?ݡOҊ593BL$t: džWuO0)HOQrB^K6H>;1Aq϶,k.|ǘ c!@){\Zxԅur^5`"Z6\ܥ5P)ov]@jAP^LJ&ZY[,,lkdD>{fn=xܻ=sɫZLKA\U"qNC]ks<ȹ|`|qPL״>A܍CР'5i2߫Z3f}k Zoǯ8I[bkҕ?((S!ݞ }xF hȇiSpW$ S69>uI(>d߹ ʠ dTR?+k/B@4 (,iUaLf,f&:-}0\/,kę!#s߈ AL4_7 h;NhqL/G5%RxWMQ> d&6%RY+دݳir"#7Wgǂ} %2}"D_%=֋t5 оA6huC SL㤌UTbk6r*- 4!MrcE!C ZfMQ^V~@Eݒ#9boJM{-̡B{1xswiF_*4CZ 5:`_0׶>7rE.Af]_F$ v#oނ7-Ec{U8<^B.*]r0yBZRסV׆<=sW.-jp%bDj#< ,Q]7n#V hMQdz9K (iXձ/Fy @},͕༠HtL/M_|qỺ.- qWLy'/Mk=B_Z*4SNB"͜<1璟KCQcn8󳿞cy+_aeN:q1&ϕ:GK6BXcQDMf;5WG"ݱk{wp˾5{ ؏،=:= ~ޞ#̹Zytq8c%%?D>tɚZwrk=^~:?,ˆ=Ds[}=*ᜁj۷.hv/ }B1(J/v:@s8ݠrU\44.IoM7K '6BME;dDXqxp! {t^ ]*Ҋ;Z蓶BDD]s#á,]Zn1lJ 7{MJ_t2DZҡ i8R*cS')'yDZl.YoR=T*Z)+Nq/MV5Q3|Ms;@P4/&=קkE d\;ljHX_>mf*uBUl/86AP>ԏ+OUoy:U;xӗ@^Nv\pߚH$YW/T=win[T;DŽ `gw{8=LaVKRUEa>uժ^D}(gW[bRㅘ㬱MPW~.,\U˦6_L% VPB{I%>KJ/l)X^jJ$d ̼2s@67sh[GŬ/Fzяv,"Q*tDvL! Qxÿܘ)"E,3Gcym]e"Rݾ1g<"X{ݞͳH5ĵkogv cj}8!#Ǡwl˟miyꇆLCLYz ŸoOۧ 4p; ۚm۶mo۶m۶m۶m۶=Lܘ;O祢˕2(^%l}A *1r`ht?->) >>0)<V`,K[pI=480wqy&ߌ9 9;4TW~wf; 314GCh 4gyiGEPoiiGGMf. 3o`guq!ݾZ؋!#+퐁Ŏ&˭m儻P#wwvt|!!q}.*wQe|܁ԜCQk7' [ôvAS[E5pFG] }~fILZt%ief=;~PT٭m ;lV|Q<n8`g"m'2B/S\9 Zie_ƛ(W}9˂fvީk"yy0M%Z|친N}&>WXGG[o ,wˡA$2NqRN^ Z\ea}.Jp%y>TkCMMJ/WY)1 * ,OʙTdQX`{RZP VٱliSPU|{qX5J0}f8 R$>jӯKA6.Fn˧^\rugܾ)Qo" :ȟ$Ʊ7ϣ3wVCYL%zhKeE"ƒk(+֒HjN2Xv4 PwjI>E&j|] !`‹V?:\u ිMV*,ucĀr2;o*Q$gm s\Z*n=o 7U݅g w '7,ԩ*'-./T UUtƑwc`B~ j6{'Z;DXZd`DEUJT熭M{v^m <U1{z frzz;%#"n k6&"$Jvvc{%C벲#'5u]'FZǩN7pcddcr0e#bYbvY񲰆NgzWy_IMHDNY3Oc@9ºƤA[pcj8.Sh@#TRrK+H12Pg%>QU8̐ VAz+ \z?7&l/ '+70g&uOP`Q+ս%`u}$`u_' ٮqoDLM,vJҧԡ}4fyiu8v0q< [XKZqKaAb=fCD&Ƞ1{Xj 9_Iy~K3 nnKQ:4yr\2񋁾w!~8-ywEW<͉ai>/p/ )rTs'ϛUI!̿ͅEΜ@Is@Njz|S $_'~"55Wj|'뀢**5mx'=S)wk!ɞgVV{RHI4͉aStUjxc$s`YKT_ J m!JpU5R)6ѧ^S'92k%>Hpe 8˞ ɿ[bA+i87g >%;a_!? 
J9ӊO4jQ2 k/ ؍Iɂ6An3Vf.gv .:ne|QD2t+DbgOr=5dyt6@{`'XLXee"ѹ$pJɦQAVM̑TD|!+F;Y*g>H62/K'bZE= mT>.l/VX ŞByefgiSH/5ir8/gGS o (k{@ dNEt2!M?uw_m}HW᎟0j3N _}K»ByzstX%-Qps S#*XEI:&@H &C!4~®8t0yh QL#nWwpdى6$p64X3h}hք@p|A~RI ݫvmڴ=izPx{qdNqȒ*tx (eUdr)Dk?AlZa ӣxtQT)qϭ;Ür&>wJ܎ 8`ݎ͎:bʐ`ƒ C8'b̜&kd D9@xT+4ʱ &.x#;9?'gܗ*6Jg$m72jȔb"<g{u)twc_r#ڹ]e!#Ar xJ$# d֯@w#8 ?6Mo` SCh"u{;I=q6%A!G3쳙`pN}s<9+M  խٽI6ӀJͶ7g)T{Dp}Y{KED ?cr/AʲD&cSOx`㽌Re4=/Z6KK-zhOr?oowkFO_`I?0~f͕\Bhgy^H: G_*ȑ+M[*^.ʋ5?v }~8 zj5wtco4IŌRl={  \i?˶:Ç8tĔl ŚKf8۽8W${ {^h:RLӜae۶ŨjNᗆH\%F }d"{8A/\"w CԠhA{+,Lomo ]\Z)_O>- Ђt0rV-^<Jz&\biVjjgNQOtu–~'Fd\RdMz,\ wQ]. 2O0 z8 ̠/yӦlXv-"ÙQ X ?p9XK7KIZ6Y3gk\^\B??_\( {"gRE` ,vd|N`*&˺Mׅ,Ffs_c*$ES)~bۚtKƱ Ns(}E# HxQ3|-ߕ~ :Mlր׉ҋ;t+ (oMP05+ѳno@s1:Y!Ǭ, . 륊_wfJ(({ƫٻ Yo^o␲'RHsJr.bxNL-<ڴE ks6k\2DK +˩L|f Koa:2yubQ0>=*&7UǰXkS3m枥ubg@jkC?} 5'%mK@d p̕vr.w/Ւ+YReDXQ V1m2UfULx:A^훭i9 |6OJ8ePzįd>xa°2]c9>eYHҦVRUbz\l0T`pG"ctx2mQG̲_g5_ΐǏ\:n*aB I{K0LnN1cm OBuѥtTmv7^o]=qwT,!/vaϔ"qKnЂ=:qbpAEK%Бv%C*!V&Li탚1d-$͡ zgo%deKKWAꐆƑAY$XQEj\!:Zjʛ̱U3Ҍb4+ )pXٹbӘ860GtHu0RpS\G~̓r%A? ~IbL 6'[/̀5dng^ ɩJ@ֹzj1.hGhP!ד6њ7nu>R: Q"? +׹,RQ,,xAƲ.NO!_ɮXq^j@t-Fz+Ż!NŅ-`S"vky=t05;tjChmw"G%kp.)nA-gqE_SZZ >6#5[]6T <"z`'F/gKGkNq"ܰ4Zc⒖Zu=U1zDVXj9'EV&,Ts`;ZjZ bz˄]'|Q3JE#|`G“]$BWpJbs"bSwQfM8ބpTeϬ(F*@ K#RQ' p+wQSE p18ϰIB;[.b/z=b)LVGe zXkq֋ꀒ2NjZ]s5Uå4}kỪ{u㒼k| J]ua;"v|QU 2յϕg/ɇ3M(U"*L%d<H**bU2>[H\4 2)$I&HUQ7!_SL%z`CmD ۢqee<*8^u%ڝa̼nC=@gK"YA4'Thx$j#ܣR!jTxHa]ckGW`ݔ?֫n* UPN^Eۆ>,> ggRybmQfĩX$F?Jt-! 
ï7H;RE(E{yO}35Motb/ CLK2Ѹ1ʂFڐTXmS@ZB^$5aP~ъXA[M`k"0T=Z4L!'=KX/Hu6ۭ/Ghh$6daKmժ:Dp̞ n[hh/],ݖ|];5?,X+!&FW:i~]W49xúO)bhm5T_KcLwi> 6[~sڇ7*q0'&tRMsYaనku'K-vUv zhwijQ*#Xz`w\fU; 0S~UD猓1$"Gff:괻[دHC߭3_S˴ )Y[0LP``ZxG&:qgHB#@nI]fռC!<=KF63Tr3Ur\ #$Ai[zxQq9 " Exg f(Jɉ.tܼټuܸg}nk X9bd num'^$Z g!"c ql2ެY!T!"%}kBƅ2 ?pACfēwꆯXԸVMgܕ7taW5f?t!Qy<|W<g)" 'Iqf{x772'ËWԞ\I(Ufn9OUzYY>ʇM[n,* 3MHղtohuп$\ABu e|\,xU afX<6a "eܔ;11aɒY6R˜|%Í?X1jw dE)~pGg|Z"ǵQ$c{mw07}<9%|G`>p 넏{ޫ}/C;{8-8^xNZy"'L&nV 1;ЫY ^ja0GH0]}Btcx4[zsѻsqܹc/' 4Q[+ GL}Q^q&?V1Q@A _shqg\AԈfXwD]xC׭zc)ɛ:quHenqs$S+r6%se\7UA0 Ũ1o|ybК7f$sTֵQ5ќsyً ِB#'{kp f6[K"w6&0܃sS2x?MVMg:>m'zVi\^~,BuO+id<(Tg_A|w*$&A; yrWnpEy;9> ԉ2r όF?C&{xO.ۈN>#}#|z =V팂k -yR Z3LΎ*Wņy. >Hv˘ 5!Z!{3\Ν,el :+(6 >s繒¡ilKw 9vUBo Ey45r!ϳ'Tk&' aZ>#-)Qj5;]PNtB[z|>< Gr{7")6OEҮ@P wcf.5;DuGLf [.]lʴ0UE_d H 4ׇyK-OCE| "΍">XJ'-8_~ze->(Jë>8R8%16V&^CͿ5ut&P*^Ld/_f-.f[F^_Z"ruN,Sm dZp{CYOt<?FwnO^9@[_pf SH?0"ҽN=~XOa,(1Nع0ɜB #`޴Mל('AwzfDܑz1TŀقnH-Ol͙(6j\v'J9n.>Qn=sT16 [8m/X{I:ؠgK%ʗe} dxF-Cbʢ Aӎb߬g#iX/ѡZ-0_Xqd<`h^zһvo[t}'΅Q_+Ц+ \.d&IK GAPb~%GSȺr<23K \>?b>Ь~CݹIu{>GP_tqRPAPYN˔RVz; }P|Ze籙h†Ml27tǿo 9 Y8|< -sH,.O Wc>Wեk_ˈVT+)Wtr#ݽ_[`-.z0ȚIbҬҧ`ҷ#c#26uyV |o25y~`#F;L_JK狝~2/F/3%5r:i]bo+p4\oX#Tt\T*Z* mT:&ySdNYu44,į{ggnc3wcgGWܦul7u ) p"Q@ 4lde$.ZH3Ȭ6Au&J4 ~&9t mϘK;{[!!!vZ즟]vnnM];RܴG-"cw'A_#zov*%NƏ_8OL,QNWHp"1@bԟ=)>xHP} ˺J1~'Wȓ3Q#˭S`1gN0>̵iT"04{t$xB$Z;ܙ"WK( ࠮u=Y'J=Y-PES×=sᣈk%2" hp$ w "ktX Ff},UiV 5";xrH+?r:x0DĄ>& =={9ܫXNYTϞ]%KUI3GTIy!; sQpF}u+E$\2zO__YS2ȣhjCT4 ͱ%%;4xkЏU`>*AKH&g*UdAmH)7Hr>9q]EQtΠ=1 pUa 8$p;Pc GIgF@=A:{ ɭ5m.`@[-:1 %xl5βP>xsAPJ ٰzzb|.VggчkՉS@+[= Qf܄["]2{6R1֞p;GPeJ 0zٕ8*Som9=# 1le5R;G&}##h9Vuo0xi#2=$6;HSa#UC ܰ6$ϯ=OnZ^ N#dEM&ܢJx ڽ.տ~xppFfG92\U&G=U]쑵JCWa Q wO4xPx W3mPiCF/\ ;%e7䓡mZ {3bHmS9k]|ھL g){?%2wO202=o b]M+l۳sӖvڃ*ri},Tm{Ņg^> %C/3T:!d8T'x&C=˰ rsƄ,-ONG6L AEv@l,@Qu@[DLXkoM~֦&!ݑ9xG1PA<'^!"a7hmjMhU_74F>@RXx緹6 (0z5˭ߙ2x/|?oK>!_S;=gjs:T|[S3;־7ԣa3 q$J~̘ÔbUL:.9Psh,7~rމҁ;ދXFMbuUAp$@AZ_>7,f[N G%FG$b3/;CPVRr; ~8cS-5;F7[À7ytFW+jXz-[=?XA"/%kVq.MlբjtJoM O&'>=on"Zu*|8Ӆ-ORŲ6WY=2l̸Dk&.Zg&T8Cz]_@mUV|j>X)0NgD)Y! 
$V%{KE"Q,\st aR,})U4ii<9iaGDQF뾚 -Qj-cDCrWIَoJ#01 )`eH#jxSw  w~5o ;I5@Vt7b_-Pfgzi+CLD#m ֭R aSֹn2s|OVf)կe>gwyL ȫFʓw0 @"fוZIzDe8i;{(MY},R^cXy FK?αIlG#nI}lwv fi*B+:NJ;ҡ;G_û*DĪʱZq E(ykD pFL~X+΀AC=Ό ް4[ *s+٨aEpC< c _tOEĄ /^͍OlЂ)!gx:q]i<& il[ E&3 ],c9fSP]B;8e4Sup`d'nluٻjSD"9&y2uf.1~zy㴕#wM@IE.^@A'l=`vK4kR"߸QH Qp`V8ιJeJCفAҺ,(+hE9Ä2v0VYn{o F [ kԟcA9l-%ɷЭp/xOЄhȻ<EKKSba3? q@HQ&UI{6% UMPX *v4൐K}` ԩ~BB6n7ųgRqS̒3 ~#Y죻+@w Y5| IzϷc -N |sȟ٪!sc%,u^?iR}sN5l7{ie }+[O xYE 'WP*%*<tdH.0Q܀Od|ಎ\t T^1 YΕhXA$#cY2ʚZ|N$rD4 =n_;_[E#c)V[Kp#πnY#bܕd"ΝWю_א$.PSc: zJ](#O,.+"#!'3O[ @CF@VȐ/ıAʎ2E^:K b_1nC\ ѾnV6p ^xh2u4(< L C{7^HL*D!\yx ɾ%0oQS+Y3$醆E+uY:Hf3N}94#J>4בH Q ęiؽ#z{Dkl!hDQmcFУ ձ͔ ~ ;YlهvhFP52:s8vDp$d]ĚcUC t$H-^nsܻ7/E_ߝV Ck5{rˮvw7}˼͔`^ i> 3?XMЪ υl'{ R֖a1UJQg3 :J]i6_veDNxԆU#[1ifSE FIHtn4-̛giK0 & )s)T@fS2nx1Öi)CbR؟b|$DBձ~Zeg0mmBD{߄H7>_u֘!X^u㛣ߺOKJÖVK+ AᅌzgP0DOer+E O4k2V7{@~uk 'G;`FR%džnii.:V3(`--(kC[fׅWȓYuD8ڋȬxJE"9Xn{Yc4(?][1ųg١5jFiPڶ(Ppv1M}Bqa 5v_5Tc v :>Ch$㭔O]*,WYb; )Q+Q<u$0h,#78!/z 5;|{4yxŝ.@5ghueymێ`>vkՊa[0}RIJykD:G2h&t+[}a;ɽZ.ox o4ԍ,|ÒCugX6LNkї׶WeN AwIvtqlҌé2{G1a=->fABD)!K,n&ȶDS6/F l}a/wՕwPDkXIY, (-ɇ*^搕'НlYw՞ aS zޭ~㉞m&E]敷J[CyRJ꾔sgȌQtTeJ[T!7zÎLű #z,?/ߣܳ_)s3k>H:%A@ɵktq*9v)3' n]A"QݪH>P \VYﵹ[= ~Ƈp&KH히繹1k^%KÝ30ጢvrb4i ʔQ??4oVO1M$bnE\c+qd-Ugoĩg|l%XWMHY)n;<[EoEmM)byplROG&e)L 4R0oj>wG6T &G(9 \Ϊ8gW!@E>b,A.tdRBm=WpQ c4hi.Ӷ ;lSp$@ﲷЌfE11M˼@L}\¦Za;<)۹ |1ec~Y`~8Yj,>C]и::#jBߎhDz1zOd׊|+uXZUźWX/*|X:SζSҚJffNajsG{l+O1|O<)HPU[\0%cTCnmsNB08;oO--KtI S@L|\.V>.ǐg`,ߴ9NXXP9pʾ5) 961V]6=^5SΒ+/Ld/E#wzxή䜮rEJEaǗQPt+]6@' s8xq$=leeN=@VEhGoXod5.'L$]fnd q:vI&h(Z'[H4K)a }WL]NjE E댠.H'`{x1<6϶S..ŚQalYդVo(oӷ @JC33ƱOF4t}L1s'O&?»iG(tR}^j*Z݊X3*ǫ3^C] "!%PyJhvsiHыuΩNECYրC.$C`"@{4 .4>,Ғwf;EiFoʤ&auM.YRZOhU4IG\gD-5丁Go(D%xD/oi<*J֚[4ʇrqIvd'o! Z=9Eb-t1E[}3bPŨtO\tÃ7˜\[Vo0 'cFuH%3|%eX[gK9=jNƁwDZ[b ~Fk_Wk?qɓҁ^XCh-veK>UƵ\3aI`4QQPtHpY<֏8tB&nQN_Fog^ÂHknIH~ph ^Y;EJ=$KM5_ Y%v rvsfg]? 
xsBk>mљ^^5qZp Dh#yqLfbhmtȔ7} Flю޾?Qeˡ` 'nR2.at,زo-b[vր ^Rз!s).c >楅AD(Ͱ :F;tQ-$졗<ʿ\8IXHNt/w;/ǺJbΌ7oQ{e@tsT22+\oj?o*%¦MQ{tv i׸4p!f[zJ:XpY 9{ߓ S, #16C}=TpY#1)fS.nhΚ; 3#۞_pƉNεrXyC8c,V#T-H ~QOeB^DZ'sbII;70~1uhޣRfuŀl CП q0pMr>x1?L"l3^uO  ~|%AUkE8r)XDGE$Ubz yf:8Z$.ih%4VyvJbBYL~j|˾۵SH4=O0rI_~Yeڳ+SHQȘ#W]To6ow/@0OkUEE-W=㸹kqot  ,d ?RJ8n$d,(v Ռgk:߬TI5SS`rV"0&O<4-euwQ/;ݦ+8]K\Ԯjf5kGG*I*oEzd@["̌˫Y枑 ugm4:}9"7>4{_sڰN-+ɁH ܉_;&U`0z`XrmD 9dF9AH.bWpfO~t /J\hD _qz2G ,d[۶f2}%. UATKX:H=P=ש-Xnch>vw_AadQ^XS{SĘ. XV\ƋYI#Y)B@ziKDIq^E)h%k7ZVɯ"F_ ܉ɢ5O>Zm1Jxϖns}UѿO8Í3ɞ";]2X 霘  tOz# %Rn?;;U1gMf["/" w?r-n9e8k9 sfCxm9s_NIhY5!".3%kc$kqxr8|w6@o/vqi$1vQssd@ /U$yG D'4U4WcƓe"#wM|}z0> ys6v;v0[v_L_϶v)<{ Wjlwf;|QKLP?*K'r+ӝRrJ9*Wϖ%uk`+/AU(0'`DۍlIJU]7+KrUW\+>d]dW\>++Jx+{R^+?*+J|,T~U'WHU}2W>Goc+(S* U:ߢ*pJ{V~&ST|2p^\Wɗ W*V~>U>wW)(UR*xJԆU|W橖,)rZ+0W:ֶUmjI{ii誾)_wBO%EWYw|T%OS\dQKfbT%ODCYu>X[uN(3Z*?(3NP&쿍0M~ 2,rwE 2`܅G`eċE-K\}T^k]>vrUqR\>B;4y"_GE`@2O CoOy wcq`RSn#l" (}bE$$j).WL!ԍ&2u ^"II=◂6ԯ3Ѕ*[5ktY8ʵMV) X7]-~kIWVVffjrdل@=,3Vݤ M:}0hm0cYY+w ɥmzdZe5u#ʰdp}.:ZԻjؙ%izXNrf9 34>"M\&SBB#fBU\NWe@=h٤gqܲ/37uI~fvncB[|tzs; ,Y)zqzLpk0M|[R;`zG٥l RR!"_♑OpSZV'˺AvCMuBsYڪmYF<\[K@xAAพi鍋 .`)؄o_]);ؒn5~׺ۍɞs5%hc5Zh*xfzf,(}VpjV&g^eֲ\Uݍϋ.]L2)kMszhh? 
GVT9G[joU9+o1d*Oւ=4Ê,+hݬ;hTϴخh͌!7t6hMaG5a?T,<}-&d[Kr:^ϮL^ =<^@&r/7tpB $ b-҆)0_wmAD8^y\h9sToov ՞ ~ØhLx_Xq"A[`\r8[pMխfS6< V hΆ tԆ O7vĞ j韀pʶHR>OU_㓑ާ$7f2 o<2'z4`;{ȇ] l{-;)Ɍr\>;~^OM oѰT.21BJ6w?(", iC]dآǤ9 ]$ 0 H|/ho.dIZex\~ëfU9V#{Zh’ْ%suPd" W׎+ |눃 YJ.uKB]}E1FJ >;8`m=Ɣ5Svڣ|2{ =Nra!撤ISgiu/w /` oj\CG )uSks /Ksw T<wyd*M{ &AS\ ؛o Љ,zț/3A^(!5d(Ɠ|XTBEM n9W)oˆ1[AvxT9xgyjz|*A5 JUObj]"Pf/]F+~TxS9fK 7نl{zh_EzU+>@U[*ۘgt~4SI1`Փ X]ob M})t5dx@9Ȱ# cBK|<*5MC)*4& )3TI=돵s a f30/q7k.?"K P &[ctZ17А}ڥO4r~PKW=~/b""MLyP pSL6a"W0.-r(x$EWgxL\og4Hq2{>,)kSfJsv-ف\' ZnqWfKZyi;t5͚Ɉ#-TC(3p9L|쨘 0Z@oˇb\1_\Ґ,`Y`ܿ 5e5<C#9ОnM=W%FT!T3.Eᓋ;X5Y8XT31na ۭU&R:bkVFI':]áEwN`i!F\a\'Ra |6g͙2gB[(pGy)S =g2'4VajMqq7aWŻfM4d]=:˓?SFT -qe ySBx&ýuW]ei\qFHhiڮ!ٰ{F)F 6j1ygӡ{_Z.]=^2uR4"$%crSsR~\Z7*V3=TL ^ lRFTI*|am_|g[.XE?3L~,V;%>vle5;14bt6#f_$2|^kšc݃u}J؇/x .tQbjkoiK,J6\ ۻ Tj8s2v9uw{jͩ~8E}29<| 9T\{n8K|eF)a:ӊ&w%5NՋ'J%]i{ Qqº;,b|z4i.PJ d~)=)msQXԢZVlJx N[-5-.{nMX a]\.b/c6zDف)il' |&gEæO\@꒒-^RhK_C`h>Ony7XUr4A]AӯFQ3HT^,Q,Rz[yIvQ!d~Co=(L5 U:&Qm\T"$y'Ѹa h26m:4; p}azint>< OCh\zw?GcF[4(2WU6GFVky Ӈ ]LCH˜MQ8`jǭ~Z >ubR GnGtk+ D.Ã/I3=|ꅩ]rM 2"=}Mhl"ȑ@}>&q Mg$a;j8|)HX4WC)Eٱ $`HnC{PzM-+s]/S01O\9H\P`!S?`{XUȉlf1Kp=C|NM"Aydf20cӆ͑9ȕdU㞫D$3`_k (i@F.E'6O j,/^j5ÆCcm5diAg]/A?b5^7 \{]\.$>/p0gk$ʢ|'oiQJLFQ 4os_!62qL FwY)7rpwpX߃zop`>ZQ m\D#5az<7|`W/'PtRX nDl4`輔vIy~gEFtgӹ>Q4u9yCvD~n9lDK%!߶ILf hE~C~Ȩ>3eVaTzUm.{gIhAY6|3߯F>@:^tteGvX N[ !FLQUQ("m!eAqRv6p䍺J>V:w͋@X xBpfmi"2uRs`c'A0>fg\PNSFq4Ds9Pk CER5j .ɥ*ZKAa@اY`YOr*n0DsFV VDC7KOYۊĩb_1Wz_'ݚEEyK\k6n/t\'5UĚmeL$M>l~p>M򂛬c0t2{?ˏ' g۪MW[v5ΐ ˑъHn5nj23'cb<,4[k,[c3BLEmNKٛ&.QҲ!^!t²F  [/5p( i˴b-@DTаeEKF#oʂܸG嫽@{_m?q ]/3@uA:ys FcvG0̦0Ca #B4XU8^cDfW?A =s簇EvQKG+ݯ MɼЕzN.l)\,CRJi#9@D d`0[]V+F*1 oUF(p!+TuH eb~uXiRң N-@W=PoVщ\ܱ=ݧ'D04[Q?U~qXxu9nxjW̷gsK\koB޹7ߠr-;.R+Dg}|ˇ jZo?]yQqΠaT^8(Tг_L;xi1LZ$8mIh46Э2G<=:I&d24ǀ:[l|{L?yZWZ)i)֢2:w~K -zG?˻sň҂N!ڞGN`wu<ϗieٹw}z. dHn}),6sySc8L-;Q;NjǦ8Zlӂe}\ޤVAӎp )0F~ϧG m.&`b->h+F9lnP'MjU@3.'-PP)?C0v[QAĄI*nQl.+&: P]>!]4bG=FԝߡF`kc6[(7m', jh \y63Z+|) fШh~Q,sa3G}#~#F"18cxQ#11_Ht# F1D,:Wpsg\j.3%x\zre}v1"\. 
%xؤ`vv tB w [J$2|Stn2Sk4)Y`NS(*Ʊ8h6 ,)}3Si`<@R(̷&']J.s;17A?[q>|MJ:k,"qEMt|V4}1fg6}c|fahCxw.5 qK\^ ⪉/vE/ Qd$Xǹ$DC:?$$l O##6z,R.,=s󄈺Ւ@amxXX),!@= ϱl.f~ R*"aUl(oA?X./R5xxfYͳ7.na%B!ZD/kJ"vd^P&c$M5oc~C:bĢ0'I 0\bUD$p?>d)\dӲ9-eVoI]V֫,ilkڛ).@_:W7u`u9v}gire2Z̖%HЃs9&a%D B*vXLvoZ ViHxp.b`l( iHj|#cΈRsu*W1ǕOh_uQ190L&!*hB?m+=|Ϝ$*,;3Xf9 d>7b{̩Lwt A]CB6n48L7́f[Q\/4i^A53 vBP Q TvIN/ 2m譂Wt(;&/[vYYie(C%le/LTRkވb S"{4F@W0R&DL,'@LuyY6eRLgؓ ] ;7L1/ךP6vH+TފU%9Rud۠yHaF&O;ʒC/eO0D1D;f&kfk[uSwDrgի y09 +?+#Β|Qdyn-Cڊr+υh˕=IwhZ_M]7²Ƕ=zq4Ĵ},^ TsӇ~12iGYfbF吟mν2Y>iԚm0qۮJ^=>RjpjPaתupT%ۯ^يRF/Ck iTHCV\ϳd#" QI46b;-YtLtyXN6bB%"uW X޲eNp˩_kI5j0Pc>A-tzngʤeqvqT-zw Ǡ ?xϐ̰σ1Wɹ+^"Q"IJ}BV/pDj&5͙('RF:dZ!$0Mcj R ]/EJ`\[iZNh4缩\R`ؖ+66V5Ϳ&܃wy;`Sz3QdDEKHc0&4i_NX) Yɦ0h|KrKy@j!ƈsv$n=$tpn @0I:>\Z\2^ UDȷ,vzq2{=-E7-y-Iz\|v:=R;C7=}2;UVD݅o=Zڝ^鈪LC4MHwgR]*ȓ:O rv `~:Lm+ G(oYk}-/~Q|,ru!ݺqu,AJ&}%TZz8M!w9^SOrls-.j55uBL1dZ'ܑϽU43sm]A׼Lj #i|48 UZ~9A5>Z<~=hЂbr.!)eI؈bz =ڟM/[ 4"_,V1q_}c,WT W fgfqst}>R໤^_뗟:A֤n"Sqvh,Y+0-66e(b=ڝp<"w.{d'(fUW8Ȥ=^]ßZ6[F9p8^Ӊ& &MQnk bs:2`=DGu3zY vAɹ[뀗{:)zI񈥴 Ԏ x(PqB n؃#2k ?B@hH]jx۔bHa~v{4݈ aʡo̵C9*&fn+#ג ϥaMq9Ww3kR= =GۗCntuť\5RgJ q[]"Lw+T{[O޾N!-$ϩ\AÔ {k NIsӡ uO!, Z[i˛g,'@14FLة%H#h@{&&zjr VJ :+ûl:qSy{Hl$闊&ۚ!y\p0 H|W7N@;B5Ō[զCܶT <|^8vw5]y#Xٗ^R7lXtI3,4AM`bHDVO11-QoFoC3!bνͲH ȠX&042 /VD-,ݱG$+3AM45N^ ŧC[B"cgǵZNհ_"7޶j'DXcQs(GT< POx ]^o^ w(s0^JdymN{~Q98LHn/ co^/F}tf+H= LKxA,} wW3;A?|-  2B(4Q-`vM8o9B|YD̒  "؀&+/[-SS5.Zq¦gr' >BbshU4qR2/©H.UǤT:S#p3M 5D(2~Ɓ(5 "TnseqϝUڤZ4147cmWB +Z.V3vJmʝbKo(6@3Wmw8F4ȏ_YP]f?X /yGg߅8.H5 &.g_4~tJɭ yBj *+$D=N?Wd=٬9),`e$y0P`ǘM/p2\b>Ko7V,<'V2a c/eVaz{8xR \k'1 ?\3IǷ/ڲf]?u43IU_$I冕`d`&1]vd'ɇzL=oJgiIk*W!ȄTs:XH4Y'z5YTk]ZLgxp;|교n&1>9o |CKΤܸ`XMMݬL1]VNU1I|7KJ y&R+mXę+гBwP2~F]t>*(Q  l";q۩ RU͎N% } SLo#5he0}l =970 iJ%87 UJs 2[Hnb_haӗ| k?b69H/Nj1@XPѾ+i/-oW4% +5}#iy dfvO꼠a |;򳋵13ɓ~I{wq6 1uc8<)f7 |Wt/ڪ>SUH7XNp_[)x 6kԳ>S${ gkп*)O )ՇՊ׾j:NOo>ixΰ&JۍAr>zpZb*ϙQ$hw8|kxπz\>7P^y x#6bY@?Up Hry~xמ({8ǟ< dĕm` we6Iڅ]P5ʍ`HRyk} Tt޿ 6",6W9Ph;m@~dV3f[8_X'&\{AUP[]1ux5 㻭rY'H #y}p]:~yy mm0`KNȉ홸BTFKq%z"ͬj*Nfw8|QG^%+xLEH*wUZ޿Z 
|.9,ȏdup>2P}N&w HJaS]}3 /۟zh(S7FsjaH0rEkM,=4Ld=yU<9?kk`=OtVڗH.n #&Ý͎͜%@IbnԨNђ*xhP% O5ĽJZd>;>%u`^ Ncϱ|3B?gH?2( V)o}J;nG5|>2AOb){|B7me> ༦Rm&G2pu8]ݐН D'wc6o:>un;FM@=ȼJC晽 FljLğ~wp߆K= ,d'~OhJQ>etXTG}M?s3x}whFfp1k6~x}ma;q>$%M8 '6G!'xgDA86y5fzYA3^ sڻdEmomwe;ϊGgP8Te]k2c aޏw$!GLrHuDmTԺ> 'N*p0^:v[i|䉬6Ah\XR3|.t}~4b`Atm5BrROKfkӤZuG+~5_aQ=+̗Ҥ'$"g'a=t:@ b/)GQvdIc hf{KkԽHzlY\Jt\jׁu y1T9&&pHb +{M hP3ӪH!3Wi)s6{7F #ya=/C)Pʜ}+A~1,t=;{ՠR@{SY;#2|akkl:5{Ohu<BΊ}7 KX$hkʶ[CW >k49=e C0rZHRV~]!寘/{oR둭6o9FqdaD^e8&[Ƶp_ͽ):oI%t ٠iƕf_^^G_ UWN Ul.~ zGR)?SE}>XMw4]T8t9 B6局idm_Sle܉?͒٭zOd^TһnMp9%}/t26M|:ScTh"sI(orxG5ySC z~6.H?vZeP)2cj( q͵J^OC89,A^KrPGw"%`jcSM1tVvr7ucEP/%iGX?k٪fl:E1 C6ht}-%}A(ste~9fCA4,syw|}v*ye'/q3Hd4^Ѝ; h* +܊wDc@/4Rx8|Not6s[{'0xet0{#:V#1'c <fKxɣ:nOk;s<$oМO"XeZW83X(r1CXکs&mf-:MuDc]zĞ`E䰪8#]):[OLݘWa nԛX%bOvD\,3(^.;{Xɜ'qÛ$,˥/Mt>j)Jbj^a|"DS1ӊ[H ^=.LjeO%I0=6Q/?O[$9zbSrxLB}^lcz_I:r}*e^?OUIk^СjۣkʌR!؍g"+wWwjeafIa MbE`t<$ kG) > EhP e|EeB0lUBnC T&K;wN3y77b;` .KUXb[-yz,0%|cژ ^5/<1(zh~,uLC: q~q`"%1v? 62Av51"Pq+DJ_yjZ!1V>)^:svKUU,e_-o{bʔ(I};J.RơTz,_-H_i" ܅JQG|WG4f*=bO.'ƀ8D7?LMN yow rF{Qu*Vwnwlؿ.fD{]#d; H>C/QBm9Ʀ{dOy8Vu_%98,V&fReJJ82izhiBzcre4(nH`ߢؿUX9?GَCu)qu^19!E miku\^ ^tߦ|Hٿ1La^61Y# 70ޯ9+"ץDtɿgV<g~6 w Vpa >1Fw9gۏ-8q}]\7s^6.ڪݽ)ѺnԄ䄠C?3kPbnz4h455?QH:0a`qNlCEmHFwFg:3K0\_3 S://] n]~c0yl%xaJb8$9fkFUknSYZ3"V!KF,?[QXߛw>?@/عqH95l:F/@)ީxdp  $&͹q&Fc~ h06z0JLAA$c _% +2wi :# B;UțhD1q܇G˭n8 :ҁmMkw=ղeFzoѥTS e"^(ޢI,CbӅڴSR>HdH'tpPb($&D^ Dr`m愶s$R$Et&3ĐZ잂6(ygxB(X^A|G*"%aQEB$#-š<&xL$ "- i PÄAٍ7+ǤiEO{&i[*20@d@]"]DZ !@ݷ' `$hf?rȴ*$ pŭ ֧PK$o(x=D@A_Ȍ^KWBSk='W.Dn.q ^X9jtypNpKNraK ,ݷ G=8eEUޟ#DH0LL[OvY_d{HIiLD׆@D0ubo H eGz%M˛.=X=j 4IԄ(ӵ, e fo0SMIs u"w+QYw; 'Z~.ϧvv:A^ 1Ԓc QY[Uwlt7YcPh3}|~?U;3AeȬ}? ĵMkf:muƍ`eu&KV,dMTOWhRpaozYOVَjo^CP7cFmEzI?)u!9:ZsYQ<3Ļl9"?TCEdB.gR]33gHcW G2S(7tC0vʷU HPB i''eBdCgOؑ.vbw{׶l ^-LN@oGn*\BE,؀3A1 RjT>2i" I}G~c?AJ9EWH`b)@*Iw[E\0?gK1CƽgK'/hHtm77;^6L'.%+?7tW? 
U'5K7VH!ήj 8>/qi?_N!yj7_c&W1۪ڿ'of?ʅ}:~:][nqe;Dw?dK;dc;R}{gq'Rcݫ 1!F7Pvr{pQ ;$?ܓo,;5>/:úᆥy٢p%gJyu j.,-͜Zڛ7U&7.xea H<xBA&i~o@{߳,A.mێi>;UǤ9a;-gJ*=$ԋ-֟ht_j(KeGCRc]݆)&BMކE1&s{Ou<\QONQO-xLQՒe\eg3QZ-P3~Ȫ?ou> zRۺ6VjR/i*<`Wtl|U%o", Y+ݤE6;2@-IͅI ]⬏HG<Ξ(1ϐn * _8q6]0xm Y f8#b';aDR@ KwWa :_ò+txx`#2;?^.#Dq˽gOix(rF8E(Pa+hnC^GYM\^F>SYy\RoFi7hfEʢ <MMohiZll& 01l,o8bvIu`$NsM0˝r=UuuYYJ'o U!/Ub)|4,qU" aUrEkY a@AIBhB '-„X+z>?9Y*iUoK%+f EҬ%WILwEÇ:CW m݂<Ln?ELBᤁJY*J+ %1`c/Tq}t]h?tS*@ɛ[z `1 r ݤI1ELH\u8CS;,& }>#sSp`K]G42.Gb7ec㄰d+AՔ;AȏzIWS0kl|7 6$/j\ʮUk9 `h`)1%Bf;96""veY< RnR>& P1xvq;p\vS܎  6cKxW‹-E]7w5GKqZN݀BJܣvnY?LT6,!j32ȇՌb7@'WPb+K}r ? DA(_7<.ҘX e(`߂VXuq=`H|=+Z#;hvjQ #@gЮ׬ds:W3OC%09pkXp*(fs1a%kЗ6T8Y1ȐCƌBW%,:{p:]/S͐?V.b΋>/).xW ckxN/2369OXhôKjW-2A%:%5mdIo!KSL ]YU0o))؉VRx]Q()$@6f70{ׄL?Ti٫c).x\L`$RAt <(̺[*[^7ud:jقxJ^w(َW\2i5Q-clPܴؕ#&p Dف[ɽ=@dhnB2ip>m!+eN0Z5$W"@)`0Q{&|9{qRQW H}K!/)Ӥr0wM꙱/=oCZFD7O/$#q頵%;9Ո89 4%xKNyt^sJ)瓕85#@[`O`:¤Bf~pUe+Z/T~? )9Xi]NbCfG6H:F$ 1[z`fzkr>U58jo1VvgAஏ,8oyMAXŞ6#-Ǭ]JXΧ,f|l D@˻gv0FO YrA&O^q vZ[ƿ7➮h:˹N#|x[ܑ=4J:WRw2FI`c{5oelZ?^t. )f_[xsE_t 36[QXp'|2YKiyosD] 3}k>@.F4D=`)mG}IՄ8n rDuVP`>`FWOmDZ&0b~:rW ߰Ѝ=?hGnVO_CfduSUݨ"?tSJ31oF")y'“ ;S;8]RF5Jǻuv5ZN,$&lU YA;Q9r-$_0Q"( FxEtw$4.6d/\ 1b.E0ףFxDް-S#hDF{,%2ʹyMspfA%ώmdgzh|oz,ړ>ʪcȗ93&}ejGؼ%=`pzk%`zmBtKQ෧CO쩱hԅ!gW.jGo+Rk22~2B45 fտzhAFv(ut=3qN!WIaIR}rf,c',6z'($ (aF"!{( |m" ʼnnZOi\zӪ4q˦ qu&$ i ".A}hHZƐ95&p<Uie:(J2Aú߄F r ANǡBv۽ gδA?G OwoD~og!8ۘc-Xo\!2 eq31B'DG݁%zԇ'B8Q\|pD_3R2ɺ&30|BP<Ȕ&0>1~Ր.WuJ*JiJ1l26 ٶ693g"Mܹ#쁵w%B CQ$( úA$"S,uspgci]6Iߗ섯Taomb7#CS}:'QdDC2L˸)eKŔ~P|C[~{(!YEk5߲z3\72<؃-Av<.0CY]hlԣP1^{#6rJp;oj{xĤKVtJ4"YyI#7z5zO})CZ4{zG)[`WDZ*NNBwB%Gcmi0DcE4`3㡱sɒiCԳ5tܿv{abײ*jiU T)x|z%5yFN*=fW/-R-Q+ZpH\<^̇2;ٸ#ږtG ?Ks݉W֬gbtws]᎕ps/#Ҝ]lsa&.Qk\ ZӯzT1S4Džtw= pӐSW뎃l"_O'P'$ ䷵q$%?]n~ptф33!NLFMA4}! ̠ d=O1-[ A4WC}Q#86xlOrGpN-6ׅ4 )&Go.0Kf`jXƟxkUQyarںS8n87"$g;KR]R쓮 u޴8L$j{8*uu;C;l? 
Pbx<^ߋ{= 8k[E:ȝ^~@@QSq<7s M٠Ұe,4 SRJt {TQ;\8m.|r F qU]Lb U.+8I`LY$b:{|ԩe#4ef=1=Q͝N{^f98 ;:Y;89Уnc%;[WN4jT-kc^zm#ϴ?5l&GWWVo3QYwi篕@Yq2ؿT¾^<^RTkҡbP YHiLq̡NiUH-(SIt59o|ƆCpx!( dWgwӗZe#2q?& X֡/2i@<=H/+8 +dH|aag"սm tKHCD^Ng'6`@&2FƵD<0mO/5p|' IiN]^h,MzbQLưJoPT$!-g͛ {~taދǞ{;~ 3z>g/yWB !`\ekO#y\rD>]RhL']S$izBQ7+}+ȡMl]ईdQh8Uh+ihI:&۰VI悊?ʝ"bTh:@ qT:vʓnKEQ5_ב!XάHbbb} ʈ&;ڸFh>iʼm}ow)_8W~ 8^]]4;uNiMūrt>͹چqs~P"K!t8h'e;Y2^Fŕ>aDWmsyL9-}< ~ŦR.LjhxװW`;WO؁*27ۜeCux:1)x齾\Ob3 XVIXTJ3bx<\ZfPo,7^͙w嵥{{1Xĉџ4W~&=G{Y~2w5 *h[ws&d8vRTFŅ@-oЀm {8|CdcWp yf4w4Fxˤ#g: w5h w.װi/238i,3}9s+Kϻ$5p6#5ɀ*v8wOo6;SMJ:,9X29T{z0J3H?Kn+k pU)~gfwF)MG:"#.0 n.&*;E==SXf3߸V2Bu>4=j)W,' ,?yDҤTG[&TF4?N,WydBQ7fLZ>#`cb^QOG`i_ 4?NkߍxN3<\NU'o?vwim(لc\Mzؖ+v.+L>ؐ]Ӡ1go˘ے@o&pYU4Ψ#XFg$) z8TTCŸ#BoL1np8QaڕO/8_ms{7oஷ{ZPOtʒ밯+>\-읟rX:lȝgex.Mm_5t"|5z'^ʍh@?^0n`X5_">A"%AOr0jQCP$u%-mO3R, ((!ըlO&X51cMi,X ,[Pˇ2KC!lu5s;p:e2SZ{tky!h ތeo /';d/rh'TԓAlh$}oQ?V'}1 F<%N{ o >K:L^q<1D|Tiur'gۀ!y\O6^&}<~hGR\NpؔaD&kr]G2Pilyx,mFhyӜ@ $4pgkliv/l}QpK0PM-%iVl$ybBçfށ_%r1VD].]侺;ŬATLecƫU[F&`ܛ.D6T ! 
ts wbirY&U{d33,h8(k%V1oi M}'[ b%߫yy6U^c BhPS<}ˣr:6m|zVL$ty4;,SȤrrvwBMjY l↱'4bѓ\Y9S|Qg׾OL˒}1W#nɸFŠ>;ytnzDYǸSeJgv7GT@H^n\c‚qfR0J4do)6&\BH:sP6!GWVpC,fi}a9'M~zM?ux1 /rGbC[jXui x ~ DѵPXDlT)Ԣ?C|eT\A n@|w›zWPRb) {otoM+_biJ7#OʓCLe3a' R]3nP>n^r^~<JcINwg)|̉j +naZ(|CIzlgϴ #$)hQd\s٣rݺN9]87י]m{U$[_pQߞdz^*?QbbEҪ915]k'V(D/H/Z {'o$b(Dze7 eMXՃHҙ>e\pژϭ,jR!jIR7 8de(Ƕ)нPRLO$ /FG72}lY~z$%zVz7V)9\"Q]~3c{ .H٣T4IOM>" iקP eY݋5 GH(S{ˎ~u]N`g2J&*x}M`^5GȘ`mBKZ;mZh/ q>'lhsə?u:⩲ߚbEp1CU(]+uTDy%`!fI@!( .x=tխ]$e*ܹ\0%(405q8rEۺPrU*kn!`]-4W;W3h|-5~ (cq&H2Q5}"*ʭ m37Tt6|:S7ܾ Xq]v!hs>79ޞ$[Qz[ZrkDQ.N}2DZzr~^K"Dף`17k1bfG &ېP=F& u#3ԋsr &]̒=KQd˚DQL .SkP}0e(W(ޫa~$-0&iKC6jeq) ĩĈ*da> s,[C?GG: F$8R.]958T1tIEI`SLh0CV67Ƿf7kAS+k'ז5dݬ(= H)id2!s9^;NRL)d T%S*}Ry}nlhc-Q҂1L7 o{:Tv7A1?#븇#PF6ݰ"5?]!Ke}4s)H2%{櫅 ؞u"pĆ/"֍kKZ=A`d2 tG>nx(' &%Nn/RIV7;[3nG!p:(bvqr Ts5T[pɉWU2Nc}z(^?y/\hӂ=ڵtPsLhDM2+gEwwҠ1Q,c`ɾR dնR 1?196pF q.]#m9#ac q7 M*{tm9 0K0Mj9FOIKqJANXu$V>H.ԯyu Q&4CTJ =~e cy}ͷǎ'H~?P 9\7jѻ¸Pw]N B5z#pý'`?sq<~EKhL2~:Jg}{Ƣ&E1d*pkli`QzFP7\-Q;I~x`҄TmUG8HӃlzҔ$5Թ7^~˦<Usgsdp0wDap3f#'v{Uy>JWi$6"Tn W.h0}&cB ]K?*f,m()Zni3[*2{?›&Oe2P:u]ƴ }΍fxW#̯~ጙLt y [Rn/o@<S//aB-ejGM>+]`6ͻml-oX0N߈Ѧ43p೛dt|`nevoW|R *I.WZ=,_zqw :jpҬ8x~+  d9q 3a9TIib "aEz ƘPʅV΋A*%0iiif|cR L T2k*NY5J‰2^>Lvh9Ld9NO{:˴#yuh;)jTQ}  [5z?x8r C65rr2lއ+kqEutﮟN>ߕh)Bnt.GVybAӔq}s\}Gmٷ#æQ~6 Ġq;/|Z˞༬`ZJ+^{ f9# ֡-7@n J) *436W)ٜΙ35/֞{QHҀz]}FVSbrN)yBT&D'WߞG[!yjgC ө6HwȰέ;f..$\ mI9+a_-["lz53r.vPJMq淭KBYo1)gSEG2) RooO0?{v%idiX4h~E29-Kpc#`$,reC2Ŏvk˶vzY,R};-ZtӟƎrt}~΍ֆk\[a8u9 H6Cm^u. 
֯~XIuAfdIBb_1∄H:\MAJ5AS  фԮ#V1Ne?wŬ3[{{m6O$Q&LNMXJpXϟp Q8?[$/WIϞ>‹Ͽ$Tm_EDN5|` x=Os+~(ւ8}12*!y^ ' ^Q:OX:tkC,I}>XL#MTE(!00H6Z}8S46궎-Z#vH8(@F\R4("g"2ANRCLRQ8LfAw@ԤCcl6'q5@oVr_zD"O@iڊot2HIo i,F,"4RԅJX%+H\  uw!r ;\H'P&Z!ko\^ՠͯ;sl&i`ۦէ :{p7qXEa3 qyFc.iv.ļFkЍ"ChT|g;'u0420ʅr4=~X:}eZH.q Hy/ Lb9@:=b9 'E7.gEƮ$ n=z\ n>;B ,wRVOMD|N= KOI/Œ 9?o6i=gNG6nc~'#omqgHQ0xY:`v;wlL(Oոow=*( 'B:zRc)ɩls&/g_3ZO00` Pl[$z 3eLBF8T3N=>H`99ťH;!QJzC*#+ KMaD6:X8MpW7= 14c5jkrZn8-[S6A&Gؘ9.ܸpM [nY~̈'VH/b39Qȓ_-3+-#r??&OM2,f {h}`"#qsI~"uan DklRĕp;dv;@B!&dˢ|g~JZc$d.֊ ߿1ŗ|K'm[ prx^P[Bc=m%| +c `,Dip35M"KFY@"eV6.\ue哈Qj1YTH_aU&S^as*WxQXε #wWF];Oü{60c(S%>{s; mjNC^4g^.(Q^[sL:rLRefH~ x?[p@X>;@F^7{Tkս+vV,;}9$[Q|Bz]k0/9ͦ1:peg=dx2utd1w6zI@?ƒUxX[Z蹚8 =vD2n⩝ +Oo%9/2WZ#)zPvE`Tܵ.q YI,(WrF ɤ#c\\]:q=]sƬi2 'i&`ȑb)]z% ď cc49p!._ۦ@  @״d$[ʈ ^F Lv+I]98kȬݾ~29W\J*_x2(ч=fw93F8ęseGC=°(0TM1KK_es W? ,M1hi e- ^mLͬ,חh//'珝&6bQUx EݤD RtTq67 ^)Sj?fmKMA2o*!N18Ar9lr@T XR~MѪ&g2WKؗ!_԰{&Udv__r#*m!;NߘqV[5QVR:Mj{/Щ咆?8/{IZҔ}W0z)Dţ Ĉ\%w4?xD$_&~G6IG(<3X<zܹ~ @}j~ -KbXk}; Ebނ%}㾱6,10B0^ph s6`CT}gk,97 ^`fgIcfq>%pP{L43BHAa"ɇS0YuFR~s a4Jނׄ!¾I^"K=DsRt [T CI 8ZGV:E"zdT r$@3D6:x[4MEtt ,u 9EuomH4F D /ic+fXI`5MH׀d8{-]7TMrN;W/£3*t+in*d{dbC 'Ci+*GMˊZ76jm:[ݖlEhM++XMO\ɓxxBC͚L2WQceQr- DKM&gd<᝹ˍ;JexI*<_G 94tprShJ-4aF k]#^oV qˬJcY~oDZK(|C֧s8>C οoUEbd9Q 'P<߁.ٜϭqO ҿAM8V?MBxj•  y[;c&|C4F"4YBHb]}vF.X/S=w//i:j^Z7q ba\Gm~i>Ӳp5hg0:D/9sVI%z^[K3-I5FǤ$F2Cp{iR$Un,g2 $\|ot0Lb“37=6.?>Sܰd5N'$'؆xXݟ* W'Nwz-PeɳҴp]]JMF~ŭ_~BQQm6d]a#%s~8Lm;b!w|x{gu.(&jMّ-f̅__8}Y wx;@c5rOî5E= $3ќ#Ca21 k[2!fr!/̭G$+{[繽s[^ >@&@:Uyi6{"D؍v:\*,^. 
CDl]oL&{'1Oɷ P(E|{[&@dNrRGȰ;x3E;:,mޡN8V,އ&\ !o|Y ]x__6`avvdQ9:M }'og%Q ˥ PsI/pCR ˠbڟ?zWk7 s:|W`H\?({ a(Hm۶m۶m۶m۶m۶zߥTY/^U߸%)CmRX7>@meQQTS\b ]NM.G%3kAb:4م.rߵ)VWĕ@pGz۵ZػVU5hj.<*ɓTbF{=WK92$ahMzAi1+!oâ)D4|?)Cwm]]՛ֻ[ EC2//n؆@zO(᠊i iW5pmm/&4VU{j }N['|`\$)eY.PzE3>( mCݞ]Y"w AISgOo&AT w}e"% ZD4AX 25O:/U0"nr@" Jd0t#r$w\`*L"ZA=F$(ibѐEDSmSu ~F-qe$xe@Pk2āZ7 !wOXEW(TJ"^fe.2D n$Y __Jp ^ͺ9 ƺD&H u|<5֖ Ձ}A0-ٜޅ B 6k8`Zd?1c09ºqM`2gKqqeuqK[LnSJAR_9{Ie"kUSI+N7Ϭ&A99Hb`Q2&iddKZ__<΢Q]'J4׹D)-N7FjҺ{yY `ABugтeˌv)ˌJ9m&uO^ALXx6"0 S_:Ms<9I dc] 6fIS)eXl gLN{0ͳX+O|#K-Zd;cΊuqh֕QD#l):h[vċQϯ]X#zbQv@7>w0dkS=InoJQ7nD Ƌ cgWIlWv"MSK=vXKb!kO4cLf=˔DAB<0n҉5Zw2H@{hn[U!,׽OKb\vCiUN2xZOXasʱoh\*1K"L?A5b4]ӣE}\ZtAVyޓF8[<&t)pfZPi򗗆;Ѧ84AHI壈xB;iB 1ϳo*?+nOEW2Qus[㲖J:?qsBҼP/KVM6[)@M}&; ` ^ry.v@SzNF$iu "j!WPRl詼UU:yץ6>$e+ufVènAG4Sh˳'3x)Ŗi\k-41tUU8_B,2LB)j>檟^ڡ_0</U9,dx&gsl.fȍ 9{$z#7i'/zgL&/]9 f%,%@lݾ u8Zj Zjzrߺ_CCY2} xٛqs `-qۋyo>4Ey&q.^<$p+咏a|sל._F2 cȠ$K/O)rA@R/Ի%Yz{:K^:Jpܳ$[6 ';&d g*{#81Pu Qئ϶HKhO%MrUiեg=á@хb,Ke5hF$/oZb:{]H_Qڥ2\+-_5_`ZnW4 m?z=W&m Z[ d0z}Y. 
0xwYrKيahm߅|nqK[v#ey׷kW.XOYx£Y}`g|w7:pC830Ȓ:"@t ]0V̲7k "3B3;L~眚9;v{A@ @uC{F!Vq~c9`G䡓-VDS֡_*AsUϖ3:J9(*|khJ%q`O 08s8S^:117<} BӿsDZ0O==,fE&2.ݜ,ʷ7f:Z[_a5ِe S[#Tw!#B#b?, ϣa0 9~6#nB IGghj.)$baS?O,sֲX:ͮadLV/6M4W҈=j;Ik,wn%a ̼'P>Fk⠳Z_n9Z.k*Oj<'6& .\Ffo {okIBrXI{G-K#o]\n]-:-r[quZ.O#n=w/dB҆[Ζ(,~*MN_9jhI_h;M5h)bQf+NA<em_ec+5mXxx@׬ԵJ7,U9MmT'ޘͧ#@A#(W#C/w9 X@W/w{7҂T/ˠ^PA:/W%iޜViji)ަ*=fz3~Cs?U4ϏZQ {=B]|zJɢh"7E-4$m`h%.$ogtJfv^HGIDQ H.on e3]j>$CM_IiJOEy7␹"ynI69㡻 a6Ɖ !m;!/23CqDc#HƬ]6BwI1ͣ2uBχndf_&~:zҳ̸Qڧҙc/ӃBG2 7-Be Tl$ V]UrvWRMZ1Qapy:k^:s-1[jYonnRzՂ-Aqە(E *3SAO뼘G ]0vBFez{CJoB.y⹅,)Kw./`2e}?/.@3}'Ѡr3,&On<9y?0XV@Ӟ+^*ӃAˮ VͶB?_< AU3!Ӷˬ%GEM0:?MZmsmF i݇t!>o3ϳmhqxebrOոax$e*SJL0,2xrX{|iz?LT,Wʃj"R*qa{gс!pO]^qlͭ d~5V;'sbN S5CaڙKfЋV´##-M Qr4N]eVf'xewʑ(ix|IBKl^,:8hlƤR.l\Q U~8Qv~.}TT/ <7oRiNYteO8P>vPLo(9}Ɂ^݈J4tѳBlNV~|?78BxRm 8ZҸWkG>st3#99&^3AfbXC/mT"Y)]|+Ws Xǃh$np$НMIXR6kUUVɈ?rBe~tOF8chޖЖY.ke],o ன,h18Ch\ L,Ŏ< bW4ًEd3 dZG?X~Xh"(Y~̒ lw._De;v"n43# \o1lS0/ j &` իJMEp.WR&>\mk0Pwg3ǥ+v7W6whHg;jlwAvmtwv0i滶ϺŨ캀jz;}`{'v1Htn˶SȻ2͑fQ,VFSuS7ڶ xl. `T.T1?Gg+#= . -@BMfN1 (lX("MHNj&J{aJc1%FtR-pOxɠ`z䈲ֲveKY._N~Ph,4|[!lJAG[k)NJ&mP.9B݃rؔ^R,J4G6L*4>g7~}\C-=bhlw\ml`7*.U-\[g-ҹݗs~v{T(/=Dt]E{>NpXn XB̳] y6sFC 筬'Y2Azu)P- ?pWPz<+ O ( uX#k߉H:n1;} R4md-تaT1) THG2RM,3Q5`_̮I.gV޻-W ( ybd2u!*cKf{<=%jmt{c]k>Tz zEHqMs#aOvw4V!aD|nv\qQ dt%tDT3K$݄1!$Huv+ l N>qʹҰh-Wk%?`mA[ JTp3Vj$H t_Wk"N}7{Q#><1l0'{b8jI9LM#/؏}c1&"A{[{E7,_1cI^"grUÓ[~ t  U+Iq;kile0 7l|/\9>$F(kNɄE̫'`,V~1#`A 瀍E! 
U:ټ2Xp.TtF5!Xg"GH*@U 6 ]rXjz+T}J@\}߳@A-}ndCdɽuöC+a~O,EPg`] Mlr?U)i*}Wo#'+}3@: DJ۪_w4Cۗp}DN!?f(`4h\CO|ǑQX>|<$J~M>c!hMa@ 9:zT^Tt?|ՃpSڶyhbA`;һghy+C;$YĥaXO!k&*Ix9~wLs7bopu$Zo07|\ٰB5>?K_eև3jT"jꀘ'fE'Wҥ坌xk゜'4XBh;W>.m(t|NT[ʀ/?MS gGG91?][ԯS@8[H2HM0UPEGofwE|A<Ưp)ɵ7x1[\ 1a}K7 ke=]Q =]a{( (}/dkEݕyL1煃>1Uz̘t-DjP̿$Wш `4Eí#Tsb)l^ R°jU!a|mV4g'ʶgAiLHK*"@R6t -Soַb*WMA/-1oÚ]5@3{ma^ _eeщن}RhH@䦡3R#+[nLԴP&UnW 7N/U=R"n%b<@EsY[pL/|V}KC+jtEZ1A|jo0NV70i Xf"(da VKF6N?vR*6zvg!V5$kĸGKGROصx"qLqXhmfʂ&T9i½Tkr2]`- b2h0u_a D[~wjJsCm3QUΛ?3?FaoGa(FpimI'@ Q"t:sAjJgRHyN m!V岸J/fH_IB^iӼOZƲBAļLi z%J/\B?Ttzޭ h2]R]z^ kCyJ?|.1paS07~e/`QadAEr@S7]ќþdr$ >clYvisںo߿×<{gќI4ҋ=GzMdSR5$2B?x܏,/"vGOpnbG$!/&_, OڶYeG>#iZL$ڸ.RfAC#a_ydasYj-#rd)fN\BNrS{Bwq 2$ǜ8cvKS[] ѿ㴦v[ C7(6/Z9a4Ch56WzBE'q]lp+Ps˹`34ialrd..gg^))#8SBWW ĒQ *E5 h4!>*>t\`r&ϒE=+THAF/e\H]dl16o)\R)0_WϟcQH#WK>#^ëQԾ6ptcIpqFj:#P+2A$@ڛ0}#HIP8\@dJ(8* 09iCVI)ZJ >N8Z*ҩM#R%0m}])+)Fqe={ v=}w]= }-;nٝԕgB7Yꈐ[o\5Wܟ0vsQOiBʫ)4b ܲ](SGJ'NBpܷs伨6좩Nx"8b~{A1PSG@wIiw{΢Ρ!HQra~]pJZXMwvC5aUS5qutWvPKM2m*)&&+oOV&p5E#&ַ?*X,56`r[F+y1.7A`d-bOs}i"E5oP>)(W.'mAcZ2v*ga6\HHRk.Ys%mBGT'nD@x>f'׿tD{bTJcz԰ܷM^`/Q=C2dC@ 3 vdV!7=~d{MV⠉) vMvJv2B!!.5 |!⌿5 )ԋ8+2k8)L"Hr⑅v*Vr:e,TPI.r]Sf; .w)sQ9)pȃ$8{M=sI';t|Y_wxMICKNPC9T 51PcqP%_7TXuNҹayWkTu,˫^=,HbixէYSK9D܎5Nr'tn-fR}ߘf KˍPIW#<Gj[ޫ5l#v O#6w{E'TV_F)TGN.!^Ww41jWiکtHR?&:H1qqfJ]CBH6JBi] 9xJu+(JһPb04/ xַ䑃`2WaZ44aNβQ)+ga)8*wPNm i|j̿H6I֑]|KUK|`GAZm_*e)oDP6k~>VI䬤Vj  &0C:O f~a|Pa)kSrN ľ mWU>Si ' XbeEv^K$*wBa*Mxj֎oEEnYIUc(#JHܽsD1*w=`=7ˏң칤ͱ =6ؤ2\b4Uocj,]Ce8jzVAQ.-XV⩖:ٓ AAI} ?7.r_&>Lof맥_a0t'&!AoLE t_N` p3l ,# X>hsaȻo;"}ìj!bSOxU?Ҋf49z/9t|ޅ=VxȤ'"o\r8vkcyD NQ솝._/|7#u+cD,;3lh1u|͌N=CZг| f{mIq6|Z܎DW"8$ިcm9O4dޤ\(g3r-^⍸U٘nrv2 j+r"C2g]1 u< sYUѫyY̰ ^pc"l7R<'3:[}}6^˚J u q߇]]bjF@)1tk?gV96F/bj`7IPO2v̚D~"ƢRJ('F|z?q1:okUNGf(̟Ie\D|[,kށ}ÈDtx(I5}q׉9Z%%{K]w:>Tl0꺇QުtZN)u՝6Y#sR>Qcޟ睊 0K\:*TAF|s.jJ9xu?s/ jy3(2 r0u74ַoebioGņV1 _ḫͥ$udgCR rѵpuh~X+OUp>C+Ie<0+1^8-LC9/5ijLN]["ݧ+/c';w>%g$^O~`Âyt Ͽ_ Q2v?K@YӘ%˹G0@IH\oY_% m])H(EdT|W&t3IKwcl,& G%_SJDj_Au <Ɖk@v\ 4%Ý+X<1PNpwxVcqe[ں%T9#* nMXo 
vT2͂&\M`16у+o6$.*:_#<bdfj2RGk>.Γ/vM%?%heS7"D8OmH?X (RkKq^ۂ\ʼn/۱tV$O7PP&RD }0 VtvT{Y%&~եQi}rշ!OX~v,8Ǯ3o[DQh %Z(N(GUkWBvSD+n 6'hd%@W;06K(z :-X׍cKDGQ^>Fb]4 ny<_;( v_tIOVXP&W F{- 5t P^ipyn:P2\Usd:|If kU]&gZؒ#<}dE|_a=L׮ZrʗEi5>e><$9![8F6RκA^I#4 uFa*,ŀl.(: '9w("΀FdAJXGh? 6L #Geͨ0LZ9p<+OM ծ'98˟ӪMM#K&3AXvN/)1,CtQzH9e =|]oU x!oTc{:jؔvcV k,zBTH`&Y=kswc͖,<j OZm4r񣰋28r5Q{j%KffݒBsh_l) ޴'lQR]~bLY%`'-7 <3eT>WAbbl#.)BU9Yz#ݙQ1aWy,-vpPȧ)l0tD/bS9koc:;R"G#dPI}}-+E3W9P[˟߇ `s5|)|fch'A?-/LFtTK9ɱ ƎkC`Aˇz6mo  1|/*ci_V<6=\IdR.׌ cCiS@,<+gn&ےb'[;ZM^yts䗤v :_$V'$_zu%gMS#_D"}*ԵZv@9ljWIn*Z؆$vx;(8T +Բk5K6n ڞKpaLLm]L MZ T3yQee;G'S*H[ RQ"]`]l)0I+v ᣒElj]z-x~;kj%a>@?_< >`'{]BP> hO0.:0oH}!ռb:Biʲ铒s\ $+Dc)Q2" y\cݓ𯳌|31 +߼Z̞]6 N7f756Κ/Vu8~cw`4`Xߝ;߆z{7^`ƆNZ7;?y(Q aPԧ 晰_7V׹Σ<%7o/5Mi4w?b޾\_ IG;s޵ku'mO5%iylGgV؋r2g-QR>x:jB]{0!,BzrTr8,Pv%xX{DΏ &{yEk?%nA"qH$Իη=s#3I')or,L* [\LhG}rCҩ3 [JZOm,u BpH{l#q`DUir2;cKxh82')c>MOu)e\ϧ/oJ RW{ۯ iK 2|s;j?b> _IkHF^7$2%Ny,Q +|9Il$FdܑCTP5z :D{9 *u\KIPŤF-9b咬k3\ RRVE9QT َ/vi1%vE-HMp6#ʓXˆb+=" %jEBo<^@jᥩp/ jat2\9TR^'򨐖oћ+aJ^FEqb"9X1`tv{/N<>B&@O5a S`B.\CC|\#z vZjL&Y/޷(}i+[K %eU0r?EQ 99ٔE#"7 2! sL D ɣ6[R΁fe^ЊkN ‘CLr[Z0 F<1S&ʹʬf &%)+QWWTdpO=Q#ݾyIm%-9\{C2Wvcm>^?u_=o.2L/ަXu/ :>Ni{*[ &^ 2CfFTw:E.^spΖ"Cg3m/IR/x}~.,U^{O2[ Zj덞+s4/9X/L#ɺAzD%W$NMV)mHTep%밳_5~~cnԂ7 ' {XrmhL9s N|YԱ3Dwek)+%YҤRl1@K5Yr`˓C @1?q3$쥧])wO36*FhC^=hڱE'?ҷrm]c}}pA\"̾@:cto*'骪cϻg~P>*D("UM%vvZ:h$)ea|JTPBqJ4Jb;VoH˹%%;VoPҵyv qy4 x( A f)*Rk'/ZTLia 񨳍 :H;ЍDظH?BSI*䯂Pb8џ p LLڥ;Y(_}(fu]A.%FDزUܐ%kdKsܐyo= <+Kf&x#EWpOu lN+W(|m&ԏ`?]&W!& >D]GJ[)-cuBY0F /% Ͳl(F9\I?[~r?֮ZW2ێJdT]"T:z*|hsPB-K~$;P5w ҽ?b{`\#Hظ,(tOV¢-# {yT2a\Yue Wm>5R`[7"c{5;a=l*o;Y%Lag21Z-u7 y j<Ӭ2L=8љC~I*#D R0a\bݎwaʏ?IA=?@8*}a^`htӠ]Gh|O!}-IќtسsZwA0C!cuOfuvk,u\eb:6U|YT$(#:Ԛ ,-y|7, hv0&9b! |'vl0;AHLFx )SŰ= "-"Hi:#m`q\~+ ?nžR)qfM$RJ%pqڶ 7]#bEHZIqO*F_9w]}lW+M:ϙ*xxD_R_ĬGh!ND'r2wH>csp:uF}[Lo3| MUx. 
zG_dlxg6١'rv9v**ysv-lؖxÍ+VBsQ$HZ#1k<gÞfݰnlU:Pb=rGo1srq~$^]tZ~$ oS/?rKz2-Sw{ن' ذo4:D2Xñg/[5[qzuS5lakZX8WY>/kf)\yZd 5=WQq֎9il%q;Όnb̬a1rNf?T^'SB8nK^7BbVlD@$XSX9@ӫb:n,;;r1kH' %&:RLΊc^qr(}_?+I9~NfO:L2 G,:+sƖ;(u?N= WtMA -;Sŋy;65÷k0C91G-; W49j$6BBÎA#RYoGnK>nKAO9)_Z*6L^ŷ$w8IG^a JH-zE7Lcߎ\! 7^gv{߶O9M\p5`e*:dHq~3JvP B' 8r ] e"/K#`"ʉMzy~¶-&K!ɸisu:k\;c.7"' $o$!{e{/8Z2tNV)L4r5;ЕNӞ~Qe ]&xJ.pwD ܽNl r;X۶GdV/9G VG@tz;a:VHhMq[$"Jnx9{f,gn-DV)gfv Ruݖ@{e_1T*){Q~/!mǦ;:ڙ軘ژښ8y꛺w4xEsoZ*memtN#!y MJvuHr>% _Y(H[aԨ E4MiSN/?/zW#ofKJ (XKjf >L9rz3b-K [XYD`z0ɤƋ&A 0I9N9zp蘼FUo߾"͉+,}M*MK]KsjpK%"dH0;k?qBfԜ! 䃣l^t7Y> =7C]f튳9|'RN훕uz|TxaHG!9 3Se?:u0b,Еe7r~g4.P+e&'ǫ\Rհ d6+sNEMɣ&16Ik fͩS@*5@1\>bKaeŎC!S/2ϙIFrce0^;`mGc#e*v*(Ml,~f"DzQA#aitmI#~1wո6}]'VF5'i~4_z>9ʑ{ J$^^gѦ<:/^m~Oxe_V{#ѭڧW!qh 1O6a𾱇YXYG4=ɮyUtZ+̗}7&o=nSX -*Ȯ:Sf ^j L7A@7L dFF,5R-E5ÖZŔ@èhr ni\[\v djM$b Hѧٳ26pc#P{,1I`C()FŤ/ !Bq'׾) C!,QNŦp`Av J<-g$#w LeH5c2F8ۆ?衬{HDj$&2q3 7,kp'T.0?s⨡Z0Ch_39϶jKЅٗRH8/陼nu0ep=|GQ;x?gzOU̗X' U!*p "B?Ȝ \V)D_nvBT,$D:=d( SZA 8>% 8kO 2';1ށH;(. ¡\7D,uk*rgrx!|ܪXD*F@]y-S|27\v) cRՔ6R= N^{$6 ǖ0⎮6"IIϒB}(pW^$ 9}F͜<(nhag+GD5^p˹jcdJpd.F}ķtGG_KuHbwsi ɝ J$1s}EPr2t0t,UBz3ފs9ꨉ %G˅`ewϕ0s#+i=mfYˇܲX,|p` )"Д6H(4ǽ3J^)R%Zぺ}srwhh}&E楳' [÷ T%Zy56^L1ѥ9qf `p*\umcAK&1ǚݤ6w߽5#}]SikU8yRקkF656 Q]kdyjFYlB)`hjY~R*Fh5re75qyL_pF1ll%[n B~j UlbM,"e ?hNXԲ.aIbP^V|ь%Yi/HOߖ/w4Op2v=7; owX̬3ڀn;2\}bX[wxt4bs` OpD)yo*uy$JHpȇ!@aayVHE{JwBa,!+L`*υ\KGtнQOܖ;C-B 8&@5Y $2}jDz4d"2-|lyAE*ߊ:.wFb sgn1p0h'g -K;}n9qGl7zbbM}Ec6e*)ZV֚#zyA!O!=aZ‘)0e6AgGHyTzAT5 38aȂPw;~wb$xs=d`L-M z+EA=!Q*m$ſЬA !`ꩮ<d-;o%;L.ӄ\͕J5;u@3ZtΐJ'ldp[pҐ+{q,>O;W*n \=oqy:G(.bm#D_pHgJj4+I'r%{3C2`tI KǴ )}5]@( e:8lK/uL+؍ ~u縬Z 5i*JA^'vʡx/lzl3hꯩ@?^9r|Cuoi/Mp N囆B17 ęs ֕q&4482Z 4ZyZC6ԣT]˾u欽޸>#~l^v®؟.aGƒ ^33AR1@r!kA0N*'*n=i +Go!gzy?SW3 aTy8;:G;[I?Q\Qi]gf7cu4_

%V7$=1$l>}_ 1 t._d۳yFIwM=\<@N~e~8fv߈%Dy^TOiOqWV5a5s =#VwFotDv]yU6DZ%馨MvnnT.'B,!$jd_9CmIۑu3-IOS?>&~[ߐ9:8Oz%ƳY^/׏V~tQfzmU A}ٲm8Ȏ/$ƻ% &t~Iocn|ӳgQ5P.=J6zvȎ!)<=E1 Gr!,z(X7ȂQxMqDl?#ؠ AΞ 5nD:mnZseriaxE@W` u#84Iæ0?Lm&0!/G+^+ m0Q0p ,|yf[ ك60k &~Hn9Mww1 }&6.9~|?3k}TC`zͯk_YP<s)X&lm۶m۶m۶m۶l۶====DOLAEdEddjedݑx9N,lGcW{RX0'*%-in44 6ٓ i2Xr` pEQlقDI b))<_H 23ԄyX̌\yEc˰b{f3`?A]݊)arfNwq"I{̠2R.%{0~` n U\]2 dl+;Fx_8 _.%ƫh:<S 7w ^ah%|fA2웅|+jX^#"C33Eq%p&rVE|/k0EASf-83/gPi>E614Cڨd7;.NB‚DZ-Tί|/ 0*X@*+c?NfBC+"4K2뿕X-eKV-3oZ-@c](Sqƚ2K3}3T.JVBQ[-@tֿ2P@[HGRFqYHW-iImk%rJ{ie ̒aうh+y]6YV{|l$ E2NGpߕ], /oBB޶l6uRk2җG}o6lEWQ'w(ͼj7p$|r-Q6(k{qd2vGN5{pytd tHѫm4:H&LӉp|v,m1qEv?ًtUA\HxW<y4m &@7" zNx?B`\Zd& xD 0 ^H('{'_jX[o""Ax!vKC2Vї@x \bȷ`J?w,bxLA+1v%p:v%y=쭘I˿ uzciu[qXvbJNbG̅0]f=*fnju$G ϺN1?M$?JyO:K6Wǂ̴DAWBB\~C(OޮVklh=dC ~1 tg7dv 3Ek]n;VH]oҍ߻p2?09a|(r8jH{ۤл[ ihm.haW\܌&iܪrXO[Y9(&][j^ȴjOG师wk;@vMbc`)& ];@fm3G`e'3M.Ν8(0M"%{:8et Y2HZ"1E}% dqR*~#DݷUciF Z|/|giV p| ).%1R$2Đ`$n~WH2e>܎݂l?5}2zU1t ̿.n]SZBڈqV =׻(jKN#SKZry[q<|]tS3+SG.B=ټ7ka*z gXnI@pڟXeh 7vukoS_ȶuk".r~3Sax?BP7S,1k|+y@l!컋ggsx$kH5+`{, #̸kZ$r*b!|%_0AjD.{t_A]tW @+;VfL"Bp|[#t}2st p>>̅;~#a3' 46i,GN 83zdy`-mg6 E|DB`5KXM8S!whFONg݂"3Wx G2US8{*(r%5g4& -JkP| i Xrp;"k˂I;ohL<p%튱-LWl?=yo8旒f+̂JY{^9TwI IXd) :g_:z:%sj^6+e%ϢJ:ds6>7VjɨwĕD  C2,ߺ*hIW `*̞I{>UAf#]K,wfZlp tAAL=glo·:$xw72wwi 䯒)'.n2$q*]'l' rhebl=r kqAi[zE}b~)Dlq%7ٰAKo)'ΚgDF*T`ve<Щ8mQo$ߺleE,\̮ od9̌oݺrvG:/78b GpmYfW@ajɤ8plFQM & -H4%!=9rB)8r)\IXjnyCzYd5T~Ak#@c6[% D Fq-=. 
DfΆv^'Rijwf"ǽSqt|qT0?f u9?=ǠOK*US}R0^mR8@E5܂ȷq9W{7>A3U}f!cDh),,ubjk k%cύ!{>cvTo~a6E9>ykY'3U͂eBnS psw&R5QaPaDO0G/{ v~9q>#OJSNxA/[Nw|OP]?.|1ӊٶe{QʙطyC,%S%1DaNl[%A9*fHk?ۓA2^ y]>BCWAצ :B5$Uǒx4d.߰~ OQ,}, b^%X͉{Yn(Sė5N*f#uLCQx[PA (}&]g=@u{bK:3)(_cGf yI亂-K?$xb=$Üd8ˈ98=1 a^t1>"44Pd~z1O]mOjګe|5od'OW{[yy !+0plD)3{Deοzmz\AXX$VzQ}ؙ]65-8et8ߊ6w~lcGKwkp_kgv hۺ#G9-m#W+CYLH@+¾d̠0hʭVnݘS>[=\-a^Z=|c<< 9[u`nJ.ʘyTE.K^S'ƷQ#/Y  =EɆM0nܑѧyq(usH'<ȍ$zY4طs֥e& Q9{4hw> cIorpCH3ީ )CVM_8.F1n?rN41CR~@w)o0EKoxdo8j^[T1i S'mbS| Iۻ65I8R c͛ٶ/sHEEɽnVBV~62)Jõ'͛Wr, e qtjӡh4S_+Pt T';2N]n@\_.[R(QQy)tp-}:.5'0JuZ2p0t \4 ۥ8, ͗q8|\.2_o53ܗ|t ?݇mD-d5#KV3P,~x8Nka%*V {oS962ERYLϷT&:'u0@EQc+NNv/'ymt1K4\oD;:b]g TFzbť9\ Ef$HDn(* G|1,^C~ta0 O82C /ej"k<Lypg<˫Ď(uhv.d!R6P{WBed 5XD{8|{qTHޙEUʕh%XzqAgXZiTfk{z{y'i'ێY J'BiԬ_,B#PL|mi7dEUtSK>~B&]~QI+"KqrPΊ^;W~[l\?L aAs߮Փz\  ~4 _n_ 9m|t# iv O/p>μk#9J2#ғ,H%[TQxa5lSO& -4O{PǸ}M1rܴ{UvRߐb+g'AkafɁu{Mo}-d; 1c``zz1~dȋ'*l=n |oxCX{ׄiy},P=4!5d{NC-9 9'-,Җ*8yUV_OMEeQE"6;b\iaDHnRJ%a:ھ&n> mkF#ͯfsD6$q8$Gq l64567&O?}w+$ $.+V}~LpIK(Tv2Ots>R8}6q+}9o>윷yO;OgNO;OfR 㚩YnUZs ptTTLӧaRwt0ULXgGS2Te*/U+UoU_NTtDV⪾9VtN7!S UKr>U71S+ZU69S0UyU~xkWT}VT?O|«ʭ櫿TW+6rު/^K諿^Uz6z7ErkT2ήr<,VuZ6U.zT? SmLO{TI-}HSn]1S[U|d/S^I};P5W;9;){K{W=SCJB<{:'Sfbtr٫FǤr(3'3k$gAx8oB5`8A4#,H!)c31F8A0c !dpnr @H @Twݳ~(V#ⲓ~yGGx|I`LB0("Dtq؉Y|frJ `a PC2Iœ$/i%)TTNP680, ik:|3tpnԚ pnF"1N<66e3)=lc\Xw 4Z7+uk/}NiHVH@2?OKL\97^xNp(!ґ =*jوͥ'@ Kn]σONHg")*usեbrXv=5KXCZ(A/jK!&4ُLap[2cW.K3Zup^t㪦wPnTkڽέ6WC\Y}a;OpSq5a梍˩b/St.ݩɀVȹGT<4)IH/x2:?)^m_.se-f%$fwYr[WQ=,`{חRLe+Z[:("䠀H. p7Gff1:HMoV JPsVwy岈AC +<9◂G5 4E(%PV`5a:ڝINUVPZ&캄Җr :T41M0mA)+PjZRB$t*ou=wx.wTb.NENQ BD4.._&W핡g{n䵎1-8錇]fh՟*pjK.mKàq!>2+!k֖;j|Pp MtKӴ#.x'׋.pg'AnJG!$&+m1 (s`r\)aA`XܝH;k Abψ@vE杒- >0s-SUC}p|2BINygnDMFewC t']9K/]Z-:L{\8Fyz<]z"I&:uj EB~5N:t4K8wlRθ!"9-,F襵rCafKU"qֽzT%,%k?w!hk_\%?@}\WF~?h.xsI혿gno%ݤ)ari/K%!9O)gQig%֟|]д.!V^[Y% ?5DK(A)rc1 aKXxBCqu|&j7{M뜑fevV>Ը0Kp6bM@?&|oQ_os4hpp;Iz! 
RfU a+d\63Fo{hMZe!76s4 >  $8wgSXdXh >Gu%?=j3ҭpςbG;.wש,D[\*T=B iͼmac?!D]W3\7B۲ hvj$V+%,f#C-OWkc 1$uyI@ 9ٝ0L%tkTWqM4Z4t vl~0"l`AO@"9P`@!eK#yFim[`$1ooK[%qk6*]Ͱ3(5ׇyduS3ЄtlI>R~1X[M.Y8eEM>ҜVPe21" Ix۶[ !.s* ض3v9ʒd]d:B÷B#Jx[N(f,cύx&C"Ȭfa0b²#3ɖIHY뤷 dj}jԬLeoE g|X^ [)Qs;Ȯ|Ʈ=3"nF0nw @_~Z{iS8˖|Odn`TW(۴CѦWcIy׾ OgZm9R&Pzb_)}xL]qB֜J* j#?pv?xi"_`~IC9{{.S ~GJL5ғVUifX5OavN08qV>91r,3@G}hyB,1(E8|S *ƋZ|xd_+'H覽;[d8K?7IRB|=Ǎ[)Ell ;皠hP|Dc'Sb~Ue[d]fU3a(62bh%# I Y HiJƙREI |EPY4Ahs:oJX8t~~KW%3,X]aDY + uQbN)lDD<6e~je<^տVY a1nV׫k[pn)V Z8q-UYz)f )L)nF爺V/[~jq I_`d$ Ξ.RDl /"UV_VXNM++-x}Zˈ|0bxZWJ> {-̅DZH:~/ȲHX#U !5sUIy6M[FV@.EW{ʟt"zST?P7BhɶFPKرVܩֈ&P҈ AF`K"8Z i\_OĠm@o%ga|ZXo_0'bTn(Z'!5eWa}t:TkaGEGB9,p|F+%|c !4{l7*86 E"ǺaCO-.SȻlx3FFRexĽy8f)ׅ;ɸ9nv㋹6PrbG]X>wkP:8ψ,'WTmKB<Gx$o |I /\Fg^& ꄇP%Ҳ&ٱ]2 R3velc/dvl&ڐ4cT9iMbdfv@y"6o6x[Nbn2>IGdu`IǹKޚcGfw)RcbiSr~ٽaw}v7K$#8 E>!R wnl;Zk-&pX~gcQe|Zjp{o" b~RrdN+Kߡ($F=!p#PɒJpBph~OFf`n>wE6fOp5sb'CGhJ}1ˢGBE#74lZ*)M_ ˔#Nꉙ16Gfݬ!A+xAQv)#lG^?ÙǒDC<8ӅOx5D@!݌ ŧqbbAq_-& u0Re]!uOhXӇubU(Y&L;ntO)?sKqq#UՍQ\%m .>^I| }F% 3SoEV=ôD÷rJkDž@~#om$I(-^ ;RHaĜ<G^#uK3wOS2"֤!f0̃Y@Xxƣ]n{pܵ,>wC^LxPN/D ݭ2}nݰ~1v{3"ȑBR,Lń^V:7ks -ƫ]vKVKVVD%)DBkVhR_ rs]m͖<,\ĺR {U%p5isWLQP"7lV ?Q߫bO+0 Yd[eСʋZhmw5`$,zfǾ//A(Sx#pa%^uHy٨"h}F{Ch*jФZ-]?h@dc; 쿠5fj!N,!v<#RL2hl7eӺE˥7,#=؎{7D1/3 -B" m_Ejie $ǎzXf|q_5o(1\iك}kPy<1gEkXǝVl|# vlҫiζ=̖=•7Y +)b;UB7cΝtcQٱ EQ.lmWN/m1wߋ!rع-FYِ$s~w8&Y&hMPo!' }vfFYw[p\3sc5/wQt) h윆n]>rEV}u.өWz g+/?y͎lV=ySsjȆsӾR&KkUʻ¥CAbi;UoD"ljR'T@UYv-yAyP};a|1<):Z&xQF< P`ڝx GdKBJG=0< >&;l1Kb$`G!`Svʵ=]&zoD^0u1㵋V#+q3 ~ WmĬe%JX DؖuW=ƕ~zЫf%>*Vˎ{Xwp j*큆 (=ܦ_րkτg 9WO ̭ʁN2^*\ӦAhT۔#!ZS!wǖ{yq6\c6\Ubg"0 P+V, -Gc25kh3Id%I#*tVҟ'Cϡ[ -Hf$|W} #ȇȑwMYrW/[AIwmU;G,a[7qA vyR+W!m<8 FA/j𦍝뜩 >6H; 3͝>:GK僵V q );`. 
꾭s5, zpM0McMʝ Fd'ݢQ^^_Eb{݃ԟcAyV.A[˕NjP>1q18)F.gsx9û%L6 Ⱥ_p(S)P;%uhrLx/ ^ǫMMjUr0h-nUU5{}l\Ík _MfgI8l,H,nkv5zK?,"лFִ=aF Icw_ Hb*+GO G 7a(޵.,\ޒn `+Sij7M\C:yDWn{-N L4 c.jAs \ Fn5gd }6hAk DOXxEwt]5\x[j÷d'yT̋-3ܜ:ڮaƧϑʍN85߷x/ EmBu!N ź78q7~d!n0mo!"}ȖD5߃=Heghoڏkcݘ[)!_o/puMXc)Wj[ڿYKd|ך8fPvST_rՑD~ܘ^s,ӧG\ૣvN7L p3)>Ri mhb}=z:5`x8ar,]/CFTtIjl`̛p 8x汼=P㷋B ͠8_ֹJنD#a,zÚGM1A ;{ڂsnsX+Zz@gF)v;_/K ]Vj)?oGW4_)u͞n>s{}~; G-A;8wH>ckƇk5F:f>ݻY6zGFFx|&I+!Vg$V/a1;d 0^"f_VʞJNWx8w <߱! -6͊p;:;on?p(eDri78!e>u,$D7q{:6Z/ &}83z4(rDvaoz ֣ynDmtzaiOl{ U&]QȮ&SN=ݮ\5?6lx1 nl;9Ͳ}ĻvoRw :m*""s9 -^&0wC۶u!(UR4ljdSdQ;wq[1yxj|Ȳ^t&v=~pQ-l!{IGh}dͮFcmsɐEzn~a EK}ho 25zlp U[lY.mk"VݙNz+R-łcN,ݟ7yHV&Ey:wCh@A9N9T!Kq̼ 2 (uQ@NnhzoշsCPqI?0x֓ c @BLZS{ A-oɷo\|}3wi7Fpo\U-=VA)OcmC OL}K|ZZ4n"x0`<iHf,lVwݹ=DP`ܣQ$ߊ"~vӚ{:ɸ~2OK&E>"fM !|u_JcGa~"6tl߾4O0~c~{(cpr?0J 3eW @#Yd>P^{i.,?þw>2jψiiwiYNJ8]w&H+b?E pK G[(nY~rՁ{cJ×?8FA,iT*{*%rzwdԑȧ8aY3\>w_ͶG>2~&W%d~-h~6s.l2]z϶..{98@7 ^Cz}}K;K}}:O$Elн'긤tYYJ|,K \Bqp:R#6̓;!o`u n쀫X?vՄyfPJri'X¶tκb־O?/j sHPomZPʹT:pSѫ! VTEpTk=;bAWpf٧Ry<\^5(W)*7f]G]zFUjC&8xwU\_1cQ$#ҍWHn- l^Wɴ[gg?Ȥc,fct <kOLv%r+!ߤe$ğwåg¢ޝJqBew7<13cm+hŹ_Y^{a]7il;3mzX=#p 02J#@HLrX$LU(di&$e'ggjٓggjHZ/OLL§W_\G_Bg"8?[̘҆?韛g]B~Qπ,FT Ml{3uy}՞O繃S'۞{LD2ǞjJiskk X[<%NiTF?cƒ1{*5 _XUN RQ0JX͓Ch@Ahq!˟vi %F@1/I\mZ_:נrzI5N+G(S1"wAvKһrmB$oj]r>\!aF)i7nO/ܲOmt̠eodNYtPKE.d '4kLHZ j=pisUoN?/:IoXkqM>F* v(*> o OG;n4 E5H?<ԟO~RV# [L&4B1„-s "8::j۠##@jUՑ_ZdE`$#<ywj)dH (H&Z Q2#U(kolIztUZ [$|?"<>*rg$5*۠o990mb<(}o#FnLDk[$]To^xg$C#W ?$N G2|}G u+M@|!6?7)Y;%8[s͏r+ }rr>mLs1MUOCR.A1zXzu83{}cGR}y4N,fdFh ^sx5VX% Z$vxL+p2*ZdLᝪԍ$RqYj-ӵ)N^,H k628Wn>Tm-0)ݺ@FN@ӀTʻvmGͰ}U$+[fۊSahf 3\$M c78c/d U&y2JppyI/T j 0ևI7ۑ7Bt.fBx T`,Z2RoF=ǵA:KX]h5hpco~_TA2 WnKRj~+I4vg"~ =eYe7O+,I{>yɯ Wj%vP5ًzKa"dyo\),=]6 HrD-QLH6V&g[~ׅܝŐ/n2?¨ s{sHG.eʍ[dp씰 P Nv8D& +4i6%q=[^@K8H`Kq*M72f:W|ߜ"{DJ ) ΅6A|p3}93^) >C1:Zz]q)yCb{=y'hcw8gz%RIDHߞrh%άS2m]<  C 7"`e*6h * ~;TF cb[{Ϳ6~56ٖQTvwJ'6ZKMR&.nt$4aCqVNU:oʘU05ѶMi`ILBgjŊ)g*&!TN$GC(ꡊ#Mo2.8ԶAJ:ITS??ZoY ȕB9sc`415t_W*o2`׿rUfe|&,Ů WI%'HlY^qX9>}abx]GHНfP`э'Óg2iav$SО*Lv@T7 
;`(ٿ'b`o):@Cg XU3 ;_Ky*W=}W,//Zo֠bS*ljq%MXb|k: 2z1` j9O]`vgdMv[s=@td%+YJO^=t{tƜU"E6I9~<|cY/#/vNlyQ[/hixXꬬ\܇'2DT؃_$-ԓ%R|C(Xc+I7 W]烇*n6fpZ鷿 < 'fK(ByEO)A3 \gZu6>'wx(O&qq޾k ``r(p/uU?,(^g mUbK|̈́%0RM.G,:kkf/C5m gAk;(G 8ٸ5IBظ6 gJ U`D^DBP6u=Ţn/̌_ TFwb&ϝL7;?? ̀̀M;0$[%1)HjS>t@|SCg pJ{M}{wHZmJ!CS"Y"$F A.`S9JK<:+IzL)C]Vi`QK^X$i/! g^SeYImq AnzIYsx#wh\ajhfz͉/'bgg aZf`aGP]bPhuq`w.N5[2Y&: G%>0wOjU`f0%hH 3Ƚj=9MI+) s7% kU.٩KcwZުŻe9hH(R!@>a:4Hx;  32}P @\E[ZAdYN lL| ]M|~]ɷ<+&H[bKhӨ⧩{%IxsA?4f@r> ̷&hr 9 O-I@6ׅP#;{`w`mN:&U_Up1׬1:BHMESWT֪S"4ZYD') *2TZ=(N)tiL}(MJ1})_*M㽞l#]( ]j,(6v9A/ j5_NOE=TQU1ع|c*q:k!򤍧O^h`gqA(_B?&{l7;~Xd+\T-m+ܪU("lú7sz+.I^n'XK$+K.x/f)mHgY=_?ˁ>B"͂iJt#Jx9I{I X.*Uv=KBA`V,` Lu,d)) eY sB2Gh ḬGd\.߳.%HwN~ٹy";^<R|ĎP]H"6 } t"&0KCfV4PXNUQR!ta"b`Z0\ _3~lvFlo'Egt`m/a sU`yKgTaΉad@M'HYLрťDI&L*[Ȁ^ 5.aR[ܭV&cߒƑC"Tf@25k4,<]>d?Qz55SB 'eN Ï˔jbOy}*sm}F|)Ay:T(Uʨ| zw7lW?:wt>v~ -a'''G j 3tw~!F{) 0N;?"+J)(X\6iw@zbb7 ,Y=Q( :Qa8\P$;!E A@ w6)Pr5)IN/xg\΁C5P`ZE-#'YW d=Q q* ~Э~H{W46o/;N~rۋe=w/;Ù9qbG*O[v:yi XY>Mu=_nNz^k0IRVں \Wd:%O|{K loY[2q h#R8Yi@cr0 IH,+VJ90|?I7-fY[sقT 4 EaFcmMxÎw=Nk@=P9xUe7GVc;3xe\JQM}X!AӺ'r`3y[`AlWDtɯqw(QLƬ_ ưv<14ObVz$O k!zl!:4dda" iX1O3jB%/ʡԮN+\5þ3iarLL(Qvi36-msz5>΅Hi;Gu{4u)>y0ɏo33,ِ fD/m0}3̅X!^ G9[KXÓ<'qW$pr#?E+ ηx@\Y0H1c {ƹ8ҷNY +sg%1dcRN<0< #|Շ{XZ=u(~PjdԮ*`=f+Q{Wl=8jвrRUKDΚU+1""af-Ɇ_"P>4#O@u?s >*Wqo^3{5rUwҚ"bڨoKpyqִ=8fI$/I\d@R[) me,܋DIAާ*9@ 5=Ÿ>{4i)'BYG܏7Ϳ> _(_4ƶ6Ml2=lQe٤W٨J%X[YK(Sf0W+tM.SFq^Fdqm քN#JZ^S= vM!OzSTBBSpݶ%m۶m۶m۶m۶/mۙ_Uqoժػ:aufl(,l@ .1^pk_oaڱ)`LكD4x}L~ :os|fҪ"-<8. 
q%ǡp 3*`RW,,8"2*&fv1`t,}Z$,,3 %CYEiА6KUSAV9tiz(?&Ƥ1s R7/Ǖ:d2hMRYgJwq<u`2tؿݟDgy~0] Eux,e?Tjs=rȪ|7}=$88`)gg!F@ɲ ib-NQ7*DF.GmR^n9.LY 7ȢjgFr mZ ⬤?}(E` zEK)ᛟ )fQ%s bX;(%!zL %պ*|/!bQ,$tj_x 7%p-`B^ff?J%Pl ;fy SBx ֲ%xuˎU⑍#x뭽(O9w ƀ8\JV~ fNP8ttJgsFBB7&n&$5nmd)[{>+^y/HrY)uU73얨D _;#-m"SƵΑÞAvEGrY]Ho8'es!>߫2 ܀6`hPw.ItNzeP-rjḠq-<1RC݈y)tN+3؄N2aϺP>>Gc-yJ5F3Yz GΠTh9T9L | p 0^Y_&Dd}bl"kCnRx̑NrMx4W-E&^ O~[S,dmAc).>ya^D:̥jJO^56RuGavGw$ք?TqF>z66YrrVGB9EyHQxT H-\Pdyfz߼ :X 2W# Ssp"$fi?dqP<ٖcFtRh0!; YR%0\ku=-ne\ed&ֱINֲK}dst<Ǜ*o_jm+%jwh]P+pY\CRV $e; UOW,)͇my%8>۸ϜJw(9d Z 7:l`G6!W:c:41Db鷻B|:2I dkብٳ76PVݐRTAj]ʺ8URX qeE'\G-!CS*i]ʸΥ(d݂y߳g`hiyuJq)&819^LchMorRSzAދKi vhe55WMm - 0`sxЯ X&'+tG p'šo r5\C~>gGe4F2A93mg-Q+A_2&SG.i/J4 QuG=+>ȷKo"A{T/ِ,{qDrY^06!+ɖ]W/ ږ #a؎5㢂U~Wkx̝ng1KKj["@[ !$֧s}HWY3IIamYSMu ֔?UDu@7HziicD$,U\)][{g ça7ʾ,ѡ2U-,?[`,%uze6|l Yw.GtoYB9^] .pwQ9ZHuH) +FM4BHP4ͧO$2R7ڭ=}c76fba+v[zlV#Z~6oZ43 B=  ș$%m居p'2 .]݇c֩O2'l!$ǘ)5 D4OX J,%$D7,oQrs3A͹;4!զN'uۭ'71v4ˤ[s}D2_pU܏;8QZiBGJ$~|>|T2GRx܁kV'ӾBDZ#$6tIئ䇢pZ .g^F0%PwYp]ar0.s6S$^Ȣ[ 1%!Y)\d)v܇-Y54yJ7ӢM ^?PV۹I}o@,Z܉7P4-M:UG4U4 #)Ligzkwl9K3g"Wz*8^wwfp9pt]"_g$*B26~}PB&9O]~cB~~jRWRb/ŀP0N DtDH's|c `:&Y||<&zթ̚J7ߥ g6n@H+G9DCv v2=.zsҿ2cKBjاscփ͹%vntRY^3~n2yf~m_هM[|Z(JO&tq)-o ߭gJi?635^[]X0f.N*,m^GHQIDE!RB Kz#J:ͭ+U fW5EIZp@hB$jWBS߸ pZ`& ˙6r#(!VteGc^@Gr  eJ&:M%f!*{> nb#nں^5PT d}rdI6CZBm/TL6Zm3Y6%/ƸaJ2$P~bbaia 6^qAr[qy\BKejV#"=ń1ifo[jg꬀bꉧ5lzFR '6 J)6Xn*JmD<Acz퍻(/_whZޝȋ$I[['r,ciefZlAR Ƀ3iG4ϖ!ln-]MW $tmzB%&{>Dņ|%7ə! n<)#FGM&adb ^'W#W;FM S}z2_}\Pvv^ΨlF KB6#5sn]tcFvrߺˡ6Z)æWQ(X;`{MCم z-w7ؚ"Q6T>|$Q`!*VT.u VK7DAmtZRCZ&gMD ZJh?G\9![ZubZTy24%H hÓDמC>j^f؆ NݩS-` :Bsw崏II٬)dW2&TXB뙬&lIqK8 (4%w;rP[.v9\/Vv{j()9Go)ŏb ֳr(3;򙸬Yw0"hP֕-N5frB,Ct+e5Rx]GZC^HG; XrZ4RPѡˋhr eyw+Mt7c[x{XH>w:LL"C+qVHحvȠyτl)Ƣi$&E <<ۈ6ʇG;MvDe^Pʏ䒟Pt(5ax.ݠњP)'z^~"mZ,p9s?(!\p'(W$_KϘM~c_/ws{3|k^VZIMH}ϛ:ӤscA(&~"0F7baɤ! 
Ygmuܚzcn=1u5oZD$rQ:Ԙ^?3F62`PqHP[-Cb: լpH o A oݚ B~XDݎay?[)aėBll<4BYT{54iVЖg#eORfPdlͭenwrKp}t _4w3s &GڵG=&61aM#`?N+t![Q7eǟ̧^ss -u'mQ%!_L2RkOPTTYNCI*dX;]g:\o؝w-m`=Vw_hECM g: v㎤d$OytdʁG#2o|Wx2ߺpN?o藱 P anO84%. l `ͽA?۾8_ٲK:RO 9m*.9Ih 2~M+T%$?g˓.u&<($>露^pŞ Z"jHW(b㤙DF "P7<ҌpGZH8^}輲 u"1r7tSyBUN6.-t8i\ܠ.yxy' %k /pAyƓD(9F> 9f* z|+FQؐ"8]8;~-nރk{7cϮa@_b1?x^+h^:;:Inp[Sk-seEY[j~yQKRmH E \<>+)Z\[@`㇯K 92 xhS21yT+ꚷ2ĐEnIdCjuM`_`ND 0©kkb ۹OlI)ha.d* VBuܣ*ӛ-{oxt=,[jg &~&B/ey>;)Z:+O16a|Bp/&`g[Py>˸?y3`ZTjsbr[v, :v,vŌ; Iˈ[jH}^w܉$o H aU/<+U,cgOSJݏkVV:e ,ɤ#5\ AE D]`Atn䲋H:t/kE7UCՈkם^Y")YKpinڀv [*CSѧL~3Zj07 xkFP^ÖGec׷;+8,-I]K!{'+vm: lR&PPwS[7[WO{fA[&adEU+~6gdFd`?gtD/%Epo]%v١_C"\R4V?|%Ogf>! |jf֔ޥ_W""fj,NiŒ3״=#Y'8icũ;&1pֳOR.\T`_^\j tj3i"&N:S:Ѭ?al}O& {4&7ѝVZ`2N/'Hh^hg7\i2G.HU#zx\2*9[IwY`P{º:-Fjn^cH^1uM@"v @1XݵonC Cy%OUO.Φ _l O8M&PFSm!vnݱMPm{1HQYOLy?$9NG_zDqE_?-7%>@B)cx:2ZbU}GWd)x5> ڞV6ct\27]I[]k=o;zs-ʲ٩e9KXj՗17mljrl1ik?(">_NЫhki!"PU9Xg~@^4|<ƭtXȚ"ڜ .Ԫj.F%=n׾,\;r 'hGo4nɐvC ID1zM|K{HpyjI 0hg^Zb^͹:ݵ6x/꓿/[ AcCPyD$xŊdW`( V[XLE\ߎ{+w ;NqD9+}\g m`g,m21 ]B^,wMgOXP[t~v01¯<Ϟot2ukF#*~Եq/=2MCW 5If83ٍ$J/t+i9N7E7ۣ!?NKiRv]l̔1l"ِ@eЖΌ2%]PGYOed NUPrPorSz;3.yzn5TYg҃)TFiEj( "[O ܊xϫB3,):̹j*REC;C!f~)oopk;/[;toF $h{ҸA>?.ؤ4Jv&ހ;?ܔcQE b;5`_qiRaÕ8j_g]œōd2!}ZޙlƅToy| ٢V-n-@ʹxX YɄno4(Z J}G0G*cM\%83_}+r S 'AE<2KJಃ5,Or /9IgR1@a' W΢1.LsYK2)lBt%1KL]?_<*$_4 ׿qGXB6 -d5;AR>],P#H^ӕou0y)"!sȮa,N>R(`G uIQ[Zɐ?plRydK{c~'3S(b4]P>MG;^e; wD@ Kk!ebf7eWO1g I3oEWr^#o%]eGUŗD SǝOK`'=ȕ{s᪄j:Teư}r40J)-D*&eme 5҉+uo+k;4V1.nd`&,!(ϴMYoIGkoءKJ2f}D޵+R9h~)LsSj\RԱq@V^U۞fE@~r"#ρpm_(eLbӯ1R]kmh%Sk ޢ?Or 'f3 ~(vAEK);ĸϰ >J(crODrZT$qB94h*xFq !hq[xm~);e^nsM"Ɬ&c~&EKyq~#*|'c``K\^(yY{pw_cp]1F]6N)gm^֙(L,s 4@]*ؚIzĊyUc90eI$ɉdT7,G||rec0[c9Y(QMW`RXx nف ȬRm uU1ȘJ2ErT)Yk-gKHac溬`ʅ E Ȋ 7fTѷ3[1őj2µV{,ۛ_noЇst;춯Ogfu%~F\0o P{ Qw\QMcN~oj${1@ 1amh=b7y΂. 
bp](7haUOl S_3%KvהO[$3ZUM>aCqxšE, 9PAY9`@ث\9g06{})A|"\9JE{ŧbNǀ0!G1>K2Ŧ|c]"AbeLNI֤t&<g]x ^L dg܏2ew}ge9fq@.=`lŘ IEȜ|b㜰E>&.L,jAP|xARѽzsN":6OoHHFZ J"f sŵHk ]\BhZ.vD@* O!*dsm5 )@C6*_Q`Ҍd/.TBIAܻ/Y΅e[u %7?GWE>J|7)Bʭ` :׌;-3Vm`N/: R.o/:,.D6/?v>-uwp1EAkPK:ҥ0k]oXwf-/@%b{RT)# \;f`rɆYSJr=zŸ=_= 栺$ zD˻Nz7r5Mo{c,9|z sĐj1@q=?]!vоbM5^_f7A.Eܷ/j$t1Ԕ/_w=a1L |etOBFEGZu >݇+Yc@Ր phU|LSO : fʛC4!k7i)e&Ct_;/bydIέ*!JP_|ꞩ,TiDEVU<ȇ/q?< -*` 8&SυjID]чO'TAA܅X?OvsڬZtHjRelBn"AߑF"=F5ʃDl9YC|IfNyt$EFeg}AP$0YB'i؉m*!I NYPB\WCNL1܁~K]i]AV*=D*jERh*G2爍#(pXbM E3v "TB`Dy `H}?.ZQH.W8z=4dĻo9U`!ˈjBaP }(t:7Ёxj}9n2³ҟK[+4c,Thrc?i v,M|c* pe`nɆ +\"o%&Dd'l%-G4'b%2pRpW9|QP(F5ʨOϊ!TJB  ~%5Fp,6hKzlL2HV M/OyEOC.0Z*(SCIE!'l$r'!L>H7͗Ƀ!$O@5-t!˺P6<(16†^&XV n qb;@6uSpmHS͏R]&*&40`U,Rdgq}xpj~=Gg^/NG]$r"z^yO1{~;)P=)G4jrc]Mii3Џ;GoІ2LHV`ѷM| Ī|ގ!e**#9^CV0EtvBvothf{;3H'G=FLwdQML&GM+cӋeo׵A[KA7Bu[kԧ_y~c׀%1"( hњqs7_T$ #*Du̘&KKa3 n<1AaǰP0REZ L٬)i $QP! ' vbB4FM^56/*rw-eaSYeava[Lę(c'cuDmtk]YF)'"SYOaީ蓵?wPiY ߹#7ki "6XhPg"팂ܩvN58 ,u%.M‘Q*έpAvR|ܛ0Ƥnw[Y n YAJK5ًbސ? X"I;S3bMV K}JhV W){V:9sHM7왷j=YO= e⾽}zq^v90ġOi=`<\Ej1nBZZ) O" ŸbZ%X<=[bZgwvo܃]2DD u `ަnvh:B%WE_dAأcÈ+HSh Ie%PM:+HD{$O &Va::_]=y>i~'m9nӕrNWqڷ4!n ͸nZW}[ w5#gye0E1PdYnTG g5Puk("+qb gp6CuhYG,ky0̔<ط4jlt T_:Bx8 VOC>C?z`=L->FnFc窛0Jyyđ+ӠޣsifFߥr28M(~h?tյ}H7|N^HrxsbX$jd}v4L0 sqbvWgM!·;'՘%W kʎof@ztlH6{ `c-% :5՜>\ S s'|F~;2>znL Ք>ϟNxd tr81z-?x_A7X^)8x)$OӒ_)c Jcx}+2$U=5SmdYH!J+ ftzi⒥câEZ&̙xᾚ;Q/#xi ا;2bYj^JxTNBuI{.3ڨ.f}&YHIQc<0x~h|WqWvmy֮EL[նVxx~xiWa/WZ8Qj5&Ěwߢ`vI\]1·U`7P22곲'0-daLdLIPRt"IgC2H{ckADi4ָZOTZkS-EDQ eP:M#$>7<5.‡U:xY ZU: iL~_1"b!+.ro4$p5nwPjkӬiژͿW}3h Nب6jx4$΄f 0 z܈ڻ׀!&c#-9\&w_RO,E),O?,ag0:vX8@CLU#SgM@$ QsZ%B{T{q^s} H30Jܡtl͉3e8A GyWQ@^\xPUעbn=u.Bt āK=1§KLjƛ|-A֔6FGX8k/ĥfB<9[ߤlnՅ7.ѯ40[~N=O# *p);YU>dAŒ~3M-r!9M0U7{ D=eTס aLa)?hHylFx|oK1VsYT'+cRdR~OMb 91dR˄pQ5;I<Ď^HR#**n^i)w E[TA*&I5RgF\qπvVD \xidǀyÖ)VX0J'EY\}'| LT\j7ح!@QAjF:Ied2LwS kg+"8V#Kަ%V֗]fZ& ]4LWV H<5D稚z `:PPD3[Jz$R*<+?aSfl(]yLPX B؅' 9Y~A9s!gO |pfzQr&0,Sqx=NځEe>G? 
e:-tv <7fw$tX@ׅoKCl@C$S:$LVY?բ`y)L`0oy*!嘢KaƸC40}7_[[ U?U^YW~zTEnlf gEl#=9e=IJVofaO<]xsû2A~y Ls&Lit.N@D+1%ϒ5XVP>4y$, ޻eVO6Z$&U'viQ "-ߥq9 k? ELvSr|x8mk=hzu'n~mQ&b1Aqw6ø0Df.4`9|Kߩ;dL5,戃TT])\kqpr#xD%̉LL᪶69kJv9uFg;L.h7<ɐ֮6شeԭkGdTvT ̀2чh>Vk'LKtxA+,T2ͮ EJ3e%8Ky3#d)[j۵cUC׼Vc1Ӫm]VbuM6ӺT,B>Wm:XєDŽEW$g[ea1y~`gUߘ+X[q4|#TSRKQ/4UT؅v,1Jh]l ⩘sK< NwEE+ ][o2INq7{;87;!mI%ְk=UwD),x:{rK!)XTu0HJyw2%B:޼E|V.j({BR S'X61xz[L-Hu-!AAo8pa3m\]^Y!2]*;XX)sJsA6-IEybRHT_iI2f(F&UB-@ t0XwoX?:S|r?8bM9}w;CP9(ELOY&dbXaH BD塓$FSq'aOP7IKY`nI![2RO]?KGa5BTn/E쥳5{r@`]s 4ܭ^w0 @+J?_ X]6WZ YY ZzR`M{V{:x \H6m-ge`g8v8kSr0%螣pe4^nd`\QKoF^^U/h2 ct5')ҽyAXHXO =,p(|4asu|[9ng`C BHdY= 6Hޕ&x8\x&}߻L'ek(FyzLwTD<[1ņF ѩI :˘UT&jj֢g_AlO E\F $6ލ>\n%N%SdUk[c…9mwb+f’W8WSzΉ#*rTA Xo >+ϱI@TDRSk :RZLZ*Llv[ "?QnNqPǃNHG(RDWNt VCt_vvkmwKN4q(@7b!$a.RLCċ %WN1ts52R-17]{Y'Bq%ލEjDnMU{V 9`~`!U6of[@<6 a!_H.ACūZ21ZlBMx(/sLh:SB=ԦLFxr^wsf7ZvغMtv fviK8s4agRV{*^'p$r GJXsB^ *z;ZJQ݇fwy ikHUب(Ve $z'wcdN\?;-\}ƨ1`!'+q!# F'05$dpheb &GFWC=f}XĞ`46h~t:\q򛤏וEQ+p~m;&c1[P^#i69yS,wkb0ߑǴ7j4?bc5T\^%M[kR [3-YeK6DVxcgDz X4hO?F6Tc(\\>ѩNIԽr)%-Ɍ BKQ"hM̸\!1>a^PIfj (Le T҃ifS&c˼M2Ɨw*+ϟƴ];-H5# l!ϷKJx '*R' 62Iȏ \LrAPɓAT(GH$@bZ&~tT.Btv׍!w7__7O0Ǫgæߐ>6|+dA@>|ȒeGF1GA'~3{Sࢭ2Ǿߏ]Mqqq= 87O%|p 1s0LERb\%Э# Ů!fwrpԀq^E9Qda9YHOҬK6 }$zd~FzIy ~W[c$r*;He͑{i[9,cHR.1'-`$'_^ׇ"B˅.6d`5$bٕ[er<[^H掑1lZM@d͡)]k w^|GtDrGKL_-/0_=Nj]FAF3C]qVH/`x%G{+XWSi0 O fZqڊTg;'h_H닯"n=)Ksʏ hziNdʡ=KaV? x|IJV00"0|R5 -3[)azE=@TT8VZ_u\gzH)Ƈ!0@22{ˍ 4hQ٠b6„8,qD9z߾UkIqe5 (bւUECu l4h΢[qيKNdŁ/+D%b@+pcs5>~tOLOw!54{)D2yPHF Gcjœ`;DgKSLō Fa!\VU:^p/e[1hAtDKH$sY .~$A9Ᵽx30#HH)p=L@s>U@구bIX- v0] qHfD} Tv$^n_Pd@Y#|_y[0T8J3K -<3yA5|YaaUMrxk/`?D0VTV^4p tB&~^ᎱX;, gX+pGgkDM!ۂfu)VbNu݌NOiC-ZrKTإMfc4]zN|85U{R4?QэKR;q 4wuJYK /Ra$$ #E{b@{ȪOx&.Ԁ*ÌKXAdVS=?]ى?ʼ64jHҬ!CvKQځs]:\s88gCe*MwGK|p;U8Q߿<crނWlҴ tmJJfqC+oy#}Sk 2ɦj6Q97wJѱ6J(;/ ctյ~w+ {wSB\s' h~{{/ٗ}&_zi} 9ܣO2-~Z^6Qcwç>͊z2χ!y,6wx0;Rn pd͛՛d;> SN[OY>+x{C oNԘ5]28hwk% y#"b"hΰW&R:\*=n,Je=; /_lP$uGe$BrT8 ˈ~I-K"^_6y! kS9 i%wV zW53B@! 
{5=rT( 0\Qj]^9`:9/v[McZ ŦcgvHeQĊBѸ=U\`;5 $q/fow.nlW;ͯx]u|Yg5]&f*ROnSs*gq܅?ML~'qq>p[\&)*(763M@Y~.ыT"嚟D~GO(|1DHT\8wq 9"*>e!Y1vˢA 8cl1V'D9y 2 }&BXaKITGE? h$vCCSL~1Iܻ!IDbhOHԁ5rՀtπ DvI}~ l F,\!ʅP\mzJa7a8mG6B`_E\bhP:mGGu7( f-9_|H:@ve<t1 j?P>dWmAoyQ}2 RP8Mk\5ACzAQ" \vJQ==}t"C 5J̑z~7}7pDFs)}rz*qu!q&mՁk4z "GHD{,֠}|P_휼B:9fyFLG˝§dwTe0q?$a4y^U} G^k?&!-[f-Caj6]:+YSƬ&ʜZb9Iәa6Z>XH@VXZzj+e4NR"[]fjiԕiblM"yS}deفy'ar1Q?g@Z*r`+( CcJ 1aMi'R7lcl6gј`j{m4Tۅ6LZ]sO2())2 I™:ğn&n( jjh]6imO!⟬d/\$z؍jҮ|(V<ΒoE?d*-ك.򚺃| e~oXvI/CB9f:.ȦL˙ҹK4~RpP7A4 w N14>XRQ+h' 쒣\j:,;, gB s6dfҚ$ԲLy3C.VUc^fԂ4,FӼQU!Fb5Sq#;Ћ]&i.PӰ薎XUzaVjGi ۱ VmKG×;5X)1mr^1JVڄ<ǸVڛYfJAوiyDD|CH>uG54Jܱ NB$P_bq:Ҝ%Q}E\=T.!Le :תg0ù w G M͟*Cp~vR`~$r'E͔:e65w)vYg`ڜMT47iyĂl4 6yҨ_$N0WB$P8H4B:3MK_t[*50DYfYEH> @JR;af9IB zxR]S$rFF<'u %VmK4y(-y[IF[3 Xj0Ukq .'o'vo:w`]84d R矶p#:s77ǧboF,9'XvyTNsU.< b(tvoFT߉-"XO  VO}OcpFz.0; ׭.3޿!{TF¿ʚ:^`u7rP$ dx'~HCAs:Kyqae1R LnG_-:p7̭ ֆ 3](0*ee馿W60Yu|4*To] Y-urmUν^Qp6b"I^=)2|>йAlw!n{W`S[6y}dl9MB/tf.j`Z{j*=>8@X0O aVR$rwT Ma5M92;3A_П%ghyTHbђ,wJR)ĻUB၁I2erca&QkJ7sVJyH? -%Hܗ x""j<4čKT+ԺŜD6bp7B|w" X-''VL (IDYkaO@ȘQ6XNBT)b!E d*2 -Xj̩H {zư-|yQ'@ַ|*03*Μ)x&P]J9﶐,@Я8.6JI'*VUXֵ}Xݻ s%P$-NV@B ]GZiKg3HW''T }NKZ*<NS /b ɭ3UI34=fM٤OXtch CVB1*Ѫ.n1@d0uqɨs(i'7xv/կL5 OƕF WϢҖ:cD-Fu{`'ړ!G _@?Țz4DHG +w~gi̓9o4߾ZdIff[DGIT؇þӎ'/*nOM=EϛS*iu o m-7_ހڤ~=i_L>Lȃ߻ )-dH@Tn?d%V*nk 1~޷n-cƇ>c'c)oadp{ :vqU{j!` "*kv[vȊ0ՀyvUO 2酶`qI/]?BŅ[aUi`=b{Z.5c<=[vӡ}61T (B>gҖ-pNrGNR%.Pʻ TM`Yީ:$j/Z1VHU TGY4,PRN䒦*6S654Y-D"M)"E}H=8WI% 1Ť0q/ؿ^BKCVϪnXq@)\ )Qzi~X]O%W^V qhi]"Ћvkr!"rWduuRs&UR/y6C1(׷~ǖ59r^.z|k*xJQ!$ :y(jhmwEWvf"#,F6nC.ٮpBt7xhh+GBv+o4B\Y2Zޏ麤WkLpv:I.WښcR.@>E]MH~F޷%}#&$z$Q$WR[NVҵN|syn^]2 72hoyy"w u]F-L7oߛ{J4Dzhi d?jj*?М}3b߽r}" p$KeUHz^Po̒Di=ieGAP:NAY}/&/TSӳ5#j'1ʼn`edj/S)l= 6U\ǚF"͘64A's\/`g4|ެ!һ nY<6?`Ӎs'x{g_W.xo ٛڱv$xЎD&SI_D(k4Lm'9R b%C4uy"˞m[,͌T`w{ ebݽI~c 6pG#HmFLė(qĞ$5Ԙ.Uf~쟀vmSmˍ}Ǚ!8.5Sqx}տ=x#n1(@ 8@'I/e뾿b}yI/(2aµ z$ߌ2q=RWkose# p-װ:>$BPDJNȣ+0R6XľgV3! 
M(14B;Sɓ+@TGnz(x{H١>oHzjb.jST .wiaM `;);&2 Ya}m*z?|tibM`eƕy4:%tΠd"$%>Y @^F[܀QiWp*LFp.F7 & G$ܛ}>B HSM01%p0iЍ !CMi yFa[+nYU0-2ͲVP, ѢDL#n.wE=k0sawEZ#*Y9PUpD w4j؋z~Q3rE!Ud75'(]3r;_4hD"f'[_6sJgwE[#NA?.FI `^+"v\#o<\(QN)kxeOI <Aff5pa`]-\ R2 cguYP5ƫ @ou9zv\%[g\2)=Qˆ3yҦv ` V6P}U2L_a߅} y@%Wv-4'?>-OʠF94D%N~k+2V҃[J*U4VJYz5qġ GT&9n7F^ad0xw0_m;I: hܰ##+hL!ㅥ@k>@Sb9fuoJMF\;xgVUW9&*024#u(h~d@D$IT0PNNpW +VDUr0^zD8CoH3X'6l lKO8`p;5tH 51)BmU GFQQ36nhr%Z{~wE}@0L(cd#O-OP[[S%xKVA{e[tzmGM\YER8gYn6ѫT]fPbnʲɧp d3˜ ߜO5%:-c_'0'>lkQtVmtG2b@f d1V:TRYEO!h8ʖ-*m%E?>qo7 GZA-V;k_Jg2>fMknٲy~K[oDf~sp$WpFOg/˗ϊ\莀ߜ@G蠙&^PHDm,M1ɿ2֫`.|v('Zqآr,>)ۡxK޻Q>lIεNJ-wrOE-=4N_kIz,'J("ݭW0[U_XVw߯:S%qv?'YUv]pSαs`NnvVu=/InZSx168a;Hlą @k5U ."H*@h"TbTEJoH(= oaFv&P{}NY/!w`05 3c^;i(&{T(Do߷3|Fu:sssL,,gb^܏&H WlooH̗\wC/(&sT>Tc}a)~|!g /GjKBn%޾pSSw!o9\ѿA2pۭVʈ#3) k_0?aK 2x$#b($ygf%z.1܅1&2wr7xxYͫ Ju'4")Ru\dLt"Jb/M=q$dyFJ 0~v4ke^霶k6>UJe}_Ps9.^kpHhWQ魊-Wc囫%7 ߰*9h^GmG3g5Oqdž:U O=&d0awFBFQ$k1Qi&õ|0mYq|A+)QGի}cd'# D603x'K9t~}|ylY^+ft^KӒ-O峤7(jQ+R5Q?V|Q&sxQgpݨFꐔ5m^WԿip$o o<q/ΖsӅr>([}?<9Z߼~9k7:sӤfXqBjxÝ9>%C]4 B$$20J*XK߬jC;ݺAig|MטʨM At,OGe=i||\R Msd^]#/6IX3jɘ޾͜I^ARY`(۷L58ߨ.ާ^(`ˏ'rz(cK.O9DƁcG2/+X6}sZP=x}NUG9Zc﷧k#H?+7BdO,z?-kJh7sێ[غӟH831N6:> />Ô46<.HSqe)|Ju/)WR+Nm^oxh Υ(.٩ku|å[eH?S%J޷Rb1M.^ 6ץo4 l4҉a36f}R+,0˪ P_~l1zvLY׵Eӽ]z#脝P_m1+ H 37E4tfB;0 /Hy˟fT;q!O( ߵ>"ٔΓ,ִbVaW٤fLKUNN"o}V{& zIj׋e%<c܄nW.͉Pڐb899e{JWtA:!ϥgEG+ŤV3wT;-R;9<Ͼ"n.ۦ`8tм"w./m:f# KE&72RFs毛oai')%w)ҹȅZԖNunoeѧ]_l:׍M% 1gj9ͺz&/on6݇ȸ|]ca]=0}Llh˵պgCuad=Z`.Ȃ˛[鸚&oLwy|5jaNɟH[f8@cQFWӹnp>yIYr`]˦ҷ8s5t ک.uGDܷtιz؊F&9IHv"t&e?C<8S',/ } ;] exepPж:cah۟}RIR'n-|/epV.sR1e{g<)20^|C^<[!%#{ZN,Њ;pTF%do=;^^zō2~AqM\UFD \ⲡƤ-}5z~Nib7R]ڥ9ߒ]t8Z2cts9]MȎBt+(0n'WyaY1{PDc/3*ӌ'ўe?l|9YCs\˲6:!oZTrޙ/*FmZԖ14d>.3w*ܛJL'߳3"?>qTJWr 9mv#v>VsMTDNO e7ǯ[ TasCsPD;ׂDҎ=n`ΘB"st0Jp춿`eq&2 a ۦ7(%-$Q2~fWqg' RH53]#OY4Ct'ÄǓMUΐiR~T2U[Co$;\ƒzO]sk$&йCKI6zz5I9ѱqeyk-8ܴ@Alwcöͻ+ZF,|۟,F )YOx6]hOuAR̹79/kb@v*ӑ73];,F>2ٸ~LΘ/65'*D@Df9Y-7CӴl_>"igGs_fe+pQm*at.,6wCv)u/]QK>L',drY=MzDq@]M) 
7j|fŨЧc/ڡrF"L%Ӫ?$~R,ѿmA41꣣Osn?Art5r{T3-zly_W~~UϿHsVBTU!7J659~بȈcKnyg(O/_'FNv4s-K˳.i"k^i `,\u5+:T,ϵ6}tLc>+iD =CXBI.Ʈ.bZIש-p$үz:aK Ɛk,zz$*UL )~UZLxg٥?D|<G-ŇFxLc5/pAp2|bqge;.,'dqm+~5ꞝєk0$^\}iiju#cȻr6$ #ƚM$5phC]yU3lhi_H}A~һkRœ>i՚Sl4B5E x1/k}|> .dt (^>i, T{ccxӸ֞j>K#܌69oLr>6[Ȩǒ~fe|j?bԇ~ T(nO0?7}cοs*'Wv9 U3 "hYw )n%* oVs}gԕR['zWR8>I_VUT>#qiyxy9)'9a3֬, oH$ {Ь;WIqI[oRЯ37 (%zp}^hE59;sVEROr_v(vlfMx+GHVRHaZ>r :23Ҁ{ÝVV?O(Vh}V*nĮ<6$R@BQ'Sj{Kxœ}.?iPyAI&\'acd&RCPz>L֡KӢnX0%2G,ԄYdϾ.ިM\F/9|<XFsRD|K[ZDY%J1uQT&R] ݶ!ŖWo'ƽj̛A? f|֋ܗA~E23[73 p: P/8ـ Ÿ OiG?pOQ4]ϐ8Xoxl?`U:(bJ!W.NjRM t@wNaze2@ ~} !o/6u.̚6 W58@:B=}{W|k0t;r!Q@Se?t,Q*px뉢^DDZ]@f}أ==( ?3 ~܅G(Xl<ȀT'{?_c_+ چUm!~IC{g%c; Łz'JZI~ -@@,>O ؏B Z.HD'  Q{hGm!@Dw'%:=n5!`0KV_p) cO%vg=8wVCI0 L?5?G69G1œ(ĠN{zA Σ+ebYP ^ R?E+959]?EB ` &`a%kEw~%xBC(4+eW}jPwe=<IRW\Ȼ?g_`LQ 86aEtٲ#$5+8G!9<1pDB/IZGB%`HؾD}*w<@טhܴ]R_ /\N }[@D"#b9ٵ8YN4S/tuB"0hu,V)p1(:(˚e%Q ?ДA%eY|>M4egSEwe_rK ]epMÚD.VF 䋏-[v_' 1.p@أ{llc@ ae)VeUh<`)x\}@y q 5-x(H('7 ^@9bI,1,Wߗ{"`+ uC:-@"wZGtξUa5XN5Jh(@(Q0kYH}ɺkpXm^5vUsa!8Vnлh^ s^'d&O-Z_\\(Jz >C>fdq^XX0N*mTm-BHN #aDb䮽P$@Y0S  ߫OVM3Jo)y@NC-IRO^8sX9flU0'!{;Qr`(Բ[@b;pQap^ pY*O ʨji'!E07?y1B4N1dbZ xk,V΂Ś> B,XA9pBb=/Ģ`0G^: V5:*^>8Kl,ͬad_ºB{u}&y`qY *ĒZ3@'iDUݒ}tp?-5x;TtW ?׃ EUpmP! %.I8^e[%.z8bW'تeb {yH2`F"=a{KϜ تbՒk!ڕkL`xʬgkfq%Մ#0 2`paQ0g7^Zn# E . u/Ffؾ\ ]Dbt U^/ N?s+Vk)@$88Lj)@^er6.=Z-ܯ@rԑsxw*h/ zw"0ʃ;cY:^a{5n+N:&>ۅ>h,', vA74 wKd- : %SmT,m~x$vՈ{R d6jK4V2k+z9DL~ wwkcQ*SbMX MHuu3n; #fˇ0;=akqisC:aPu&ܣg?p٫+r9L*RBcNo tV`@s?Ćt;7\,V-@l#=7x I j"]P,n%8 zn/|Morȝaԁ0nj}oZ ?0ʆ~X[QYǴ1FlƁu]_| U]b@! @ @KT''fx%,PLQvu ,-@+:]sppa~ǯ@9}>L ̮3΀~“ɠ8NGl) ؓb:- IJYƅE}D}\|f1ӱkX ')|$%"Q1JM8) 7{3ƒ,G π 򯢴^U'MWLN[zjCw `, @ Zȁv_`֓UeR@ `K{&l˱u@3ClS*^:@ūzSup<~q;˚.q@ff﬛ ^7*CtzԆ%lq ha{|P` oOfa9;sn Sw^Kyg8¿yX԰QakGc {աo| i$fY}lp䗻D@X\ڌ]O^Cl,3ںw>,|}[$k~xS3#X(߲k#~,\`ojKJ*^T`.G%yh/05&+. 
*[d*Q&E@C-UJ\̺˺w?Z|@D!Uh#ThmhzF=̝&/{99gCv8;r`/va *~J[+Inp8r9Z‹(@#c(PRuFBߓ{< ŒTAJaөxo;j8V+uQtUW偊׊Ÿ Vc(2G^ܧ7ŀ#Eړg-z5B/\nN]U1cࣗA Hޚ@.e3-ђ80-ms%LĦJ^YԷunko2d ?/W]OI M^~*0d5c|>LH~g'#qKf?,ӭ›K M2t9!̩aCcRRaFX%%F0 3O 2; 6?{_8Y,RY$tY4V&!4oq @TkB%k& ?~*s5Ȅ- 艽@yXdVar֚ѩ@dZv*!6%CR0vf,V3ω{ F9ܱy HfO48ō?'w ğv*pDuԊ ƞKл`igdawhZi 2υKlgt PKdRebK+  "bin/WALinuxAgent-9.9.9.9-py2.7.eggUT$@`ux PKdR7tm HandlerManifest.jsonUT$@`ux PKdRp  manifest.xmlUT$@`ux PK% Azure-WALinuxAgent-a976115/tests/data/ga/WALinuxAgent-9.9.9.10-no_manifest.zip000066400000000000000000023351671510742556200263120ustar00rootroot00000000000000PK EUbin/UT o:>c;>cux PKcRebK+  "bin/WALinuxAgent-9.9.9.9-py2.7.eggUT $@`o:>cux c1.۶m۶m۶mm۶=\ܺ]gQuy;i`{9?Z@* vtIw_QItnNU"ꢲ9"u ][;?FR*pL{>OipqW;~'1ʁre(:oq!URx0stҖNpU .yAp*ͭBs%F^ ]""PqsO{oXGh{uPO.\wtbܫX^HX :!0_;n4ǹzse8TJѓ'~r"ãan4eKհkAhKgu,&*xA|clF )ߞSw,]g_β!_"c7stwnl+z]HdiNnY'̵[Xb#*({]x?nVL г@eG@#000010000130010J31iA"UKE+93=KQ#/*=I CPI4;"3=D#df%&h ͊+h$g+e%[,:A# qY-͋/WT\VRNL^APbWq0w{lnIV=/*%Mf{%1K^9 UyH}vh2w3c63+hcL"u U!;( m$~ li#HSÁg(5ۺ҂Հ Cq 2-6WeuZaIЅo E6} uO?.3dTK1\5Xy "#2#Ta).&a+Ȼ.@*)N�˿,9J¢t..[lcw4 vAJ&diD|_w I-qSF3>x__Sk}Xy~@-IXx.#QSsxL0;df*S,:>`pFatAs% =="BpkSoUdb=ź%U R7Mb8ߠ g"ME %R P&b83TjeV_6]t!Y;e+O #@IxP*^nAֆJjL~ 8D N[v76#ㅷўu^r@$p)q(GgZe6R23㘫ts(:6xOy/9eñUL'b"IaL({ny*௠T1v:uD%f#Z1j2M[cg"\i#LLLLL=m,X/3qvwз1u3fsgR5TR3Ե23kwS,h LCrg8ϝ/A \LahnjBoigqoDFZ"9QDpQMY&֖$VR}|ҙK(B59meg˿O2batkQ/Yt#*USi%%>6tQhE2Y7ćyk3o5-Ϭ 5xjd+5Z] с|z1*2t εvuo"/O/ڨFl3A9_jhRDJunY<]YD_kL1 9? 
Q/Z7>v v?O۫PJoFx)VxV`WiV=I- 4냹@Hr__aRgOg[NNF*e Pb3'KV֠qU ߛ-b<9 oSl|y}h{HvzDY<-$솙7Eanm>5Ha9io Ϳ3Į5xBgͯ 쓪,Җ 7?vo[(^_M-//ǖ{PB#$3vQ0gؿBߑ߯d6)J l>T9"V؝=…1'>-a×Of;/~DwT@e0rNB`< @qEJl>y& 1e(7nOl-(\ؚkׄ9wdH}5c]st||R&l3\5J*"9z8 l4BS:|W6Gf#-~ <i-[}b> A @@n GGdb$yΨ0Mc>НH+Vl|xS淡 c3=~~4|d:OTrp}H {0<\o܃`{Ә S$}7A 69DAEr2N٫ǽzzgہ yu~!5R$i߲]k?R*t [` S$ۤ3GM{vXqCh2faTё: B"6~pNO/k${ O!9d?yqw?@qDI@ 㙀f|Ǭ4'C}UKakZEt|aE#l2=QWEr'OhbRޟ,MDB(6/RH=/:4G/T E~~0GG ;sZhNs-G-@ )bzcr1 W标^ E ]" .$M-.b=R+jkT?ifz)Syݼqv@Ѡw .bsX~*ɐs6  ̞u Mm:&i b8{ O&[Z[)Rӏ$%D/UJ|8$A,[8~oG;uʋRΡ Y]SQ!h<~@ X ED`ɿҚ@JsN rluV}-1Qnw.נ#5*5zHT/x0~i6(i-YD$*o'EaNgVKФ?|7l2h9ʾ2 ؜hr FUF]nZug=ǻZE{%b*.>wT2Z5O&.3j4"KPD'`*āGr/ں Yr'BDU ̦{$Xg ^ka^eh@-0 Av`0͙V\M,`>LN4wU伭[`x$reⅪF^=#lʯk_;g,Нl_Y*(TeV=@= G{9< čUrB"KՓFww@l?zI4Y\U qQJ~Q=7>n[#\N ,8+F8\h~eac$(hSۣϹ/u7O~QV v *DИsU\V> _B <, ]1ƀi4E9o v،mv+" ˃~vYl֜Xz]ΑQ\j*F֬,9{퓺0ڐkV]acCӵ@kw^R62ow?{'o8,?BR(LZ~߹)(nJO!ZUJG ElU!Y?0IҺ:aϩB*`ݣE!EhR>c*z7QuqiA cق7>EylX tO9Klz}DRo rJ,0X6@@DRûcIp c6#۠sw#/t~,ļO; #V9x4xGCU\ ?ɒDggSNH̡&ܤ(#qEw{- HHQkh!x q`12>Kc&^x۳h(QD<\-fW~z7o~wN'CY.$B@&Sb"/ioOTEΈ^OZPs 2Qڲ BsO(nF(n]+z8?h+bZYvA?@SZqY&f@kB?u[ݕGCs)]Vf ΁`%OYYR\mӨª .BXGkDb{iͮQ` }V.kP0fSUiԴD#JfQ!%(V|(%™#*.mf /\)})4 2*-辷 G^eJص ($RtW({Ru/ߔ2ٖd[%'_^#TQiahpq~I99h:}7x+®WR9yٳ,Xkn ?M- ڪHcԨXYYt]~aBf3{pPs_RkN,0:==r"yyrͶt/FnXi ]֞"4QҽP*}icW4+d4NcBVT"y͹,BwY?Moݑ@J&TEZvݎ( NA꾛}c aΧ uRAF'&]RJoȧpCڻ"dD%q6GW; I#|/x`x]n.Z_,tݧӋϯ:qy[%S:]j:.Ns@ L|X1ݲO.K)wg)`pewTa}~Okl1e"<RSCHl r/ '(a 8=z0ICHaJstjۃsocQR$e&tr8)Uaso}dQ{Rث.ڞ&' 1Ẫ|!,O!pX wPl[ʘ-[Rl>lz1opx^x^CkEʇ2pf:IFmPtkޣѩy#yO_7-Eí15_ y夡 3ߑ]øVKG]RZոa,=OS)7Jґw$sEyRL)rU`&4*ڃ* =0 0מ}Bws/ggW_L/aľ=,!d ֠dDa QNewnI\ē<o^nKNE|Bݤ$CEURb&b$XGf   "Sp#r?2ѧf593ME>jEEy'Ez)seez'E)Q$Y-zI&#OQ{EVYt|b!ZyQ6ʕ )!?\(-VDv.2A=IeaoǸC,dC* 3`D׎AHVwE˰Y?pV;¦Z{SOR"aN^QޞEʼ}xy߅!֖G Kב޾fm_mb!X[0Ux)1ʎ=wkk76y R8U7w0&x(ΣpR.[!D4i7ԑ%+$<}͖4ӡnxyϝZRз)gˠMC6߹ݴ}<-V݇4!ݧlY*h"E6ɻpizr!h֑ۤ,3ęzv3BLLLDZ/ K3Z_%VodJEQҬ V|Íl"1bi4X9iKZ'Pif.l-:~8uwKn{AS+oYrMr]+%X5?Rfl_qKTuq0M4\Vv3{7Yunɠ9_T?Fgn? 
4FZqmRYXJz["x̒n>-A F}@BZfk8Y2:ߗM6'r['J`rZޫ(-ݡtrIf1?+TQ<<8_dP9q,pזtsNag8A El(KU`n.K1d"p GR0 T%"5?$\ Q[JSښ u/_%o&,T33Z{'Eul~L7Y!p2_!+>HzRX<2C*MDfxk.=(TBEk*(:)7bqfRAS5-Qf E:7'#_1FaJLvXZsԍ"Ǧx:=+4`=N PD@f$&uڔAX G֨bħAwgvxFH4nVd<"CTChmo!_6UsBLn/ ڭi4o(Y7mA3ڝ{!'+̺H<`]>v9K;0 n(0hH/oɚ)hJ߶n%d4t7Х҇'?+:hSצ]q.#s3U{Ӧ{j'c̙BNj[EW* |ECl%WjtZE6\g[|h%{[]ƊR}[w3Bڇ vz+zZ섪^9Y6< jva.(q3 ̏@{np;t٩ rlm lerHDT]$ȌaxkM4vGYnk떅\ ;NSlW1 ږ/٠{ܲ9`_imפ`O[gH`NN-T\sd.uxW$1˕a:8 raG<2|b!R~dqH k]!SDzp*mFq[.bTM'LW[MQ[P ͍= ^,`]2 YVA 69G~ߋR1#vDj`&śRu=&<^|K/"?J`P>cpw3]{v 2{lK@uk:sPYAQ;[ 7@K7iց Q4#~3wԝ/#dwn^HU,a,r>v!7Ur!7 P/FsF8g}:w[ggEoVjgƾs& T 5B6,<01ӱ ;l* w9(.w"=ACr}Jb-ZO:G l]mȻaAiU jݍ7F;՚;JV:r 55Jj%oo AR<l;7c! 6%)yV*5 s#Aӣ:xe&\'$CNwb26A0}嵊UaP!r>W\1*`8S8:™z0>/Y@PC eSȨ4߫ȇ9ꨔ".Sy y|1O:Y O[aUsi"ݼ:K|õR΃TIAJkȺnx  2OxYYb1lY L )N][:NF;5Yz usӥ[ѺO:r]O]߼j6+at00{"}XA9˜T6Ij }ip2%h5E5ԅ_Q='[vcVrN-[5 `v(UVnvTAT'5c CI& tUS"hLtT|ө ߺVQs:Kz:G{1Gʠt"cf d| #j6@nY5TvnUl`D#tޏ7v ')vԬ7S>.i+"V>/x,{Wܢ?4 8I+wƈ(D> Ggd!mU^Q"A&ooI.bOs~ sQvhtM[R\SvpQOu1e Is Hv<*q"kɍ|_Ibge-8'GKODbVH)ɍSL$'+1N$d?&0N)JQ-<$q 1ldjF \nIn\jeƓc稖-16 )=_҈UPuV?Rs{G6mU@G o'ϒ,x8.!4s' r Yv9FU ݯPCFhӂjYx妵FY(@{N:"* ~-z*Ѡl#rRV GQZsVy)T>.Yȃ^i m_QaBhsnKPK6YLntxXd-gB]ޏ? O$uS  0_]ߦʫܽ_!!Ci*YٻLoړ7H>5IQN{21e:y"]S{W d9BŨbS;Enh/.Y1A;O{8r"aw,Znfp QlKj= `.\>iMf`L'ѡ?Q9l^u/Y$~ f fʝ,Bqw`Zr ~,U*O$i<]!,c"p̴ҞF6 yaz%N3PrC恛YM/Z=33NN%}`^S9wټ1i.ql/W!vKxm|Kqn,=dڶ0 _g"K\k/p03= 6r,)>;,au# t+XL$gERR/az-,;|p/*֓a#7ϽS~Q<{Ϣeȣ'NRug3  K0,eNض8Xԟol}EgzFCb$=%/O㳿C>|_Z[u%zsh  #ncodhL4_`OjAۍp*.Kj)eEPFԑ:JypxJ8sp?Aئ+լmBs );&\Zň'N\R&S`h RAP0ܻ0Y 0 LPtzHÈ )B@+3HW-$ N ! 
:A@t~*vӧM]MLwormC#HfՂoTAƛsRhK\"a.ٓ(7I&a.ۙIpmTD/Tq}vwCq ʭaϾvcg4#ɾ,5Ħ+Fmy(/T#FhPEΒ [_F ڰ\HW2𳡂Ҫ1fhhzdZHʼ% ݑ!MT֗ϋV+NSZ%t:.)]VVr3R+"WvE΁ȻdNHrţ+7}.;e ]5!i_l R\ *G037&'˯5\ p39O$6y-v?V(fZ*oY R NR RO"8*Pzu1J+suIM)xB~\/+¤ \IUgt[?Tj}WY5HLMGPP:;:8;蛙o3{b~g08 nQ,*;UTWGc PN?#:xRc662zqHv {^uH Qi#XeHns\ChZ0Ji )H2?a:7jv M #A3EHCx?Ay6JOlQlZ)Vc;SkK..ܺZ2ɍ\N3;gAm:S@~+wJC C%@S#GPuT*!ILK2 dFy UG De.JH<``=Z+wCѐcx^jq Ce3P 2B4N9=Uhܓm,/De"J1Bz>"M(}D jwhq?D.F(Lo uq32^~R=%=|N7!6-Yqt1P+|0ʖP Ug]дOWߟLu!3v5ZVҫ0J 9mUTi/Z9\S:K9dElPj 'HBe/%-Og=<Ú@Yv\BqqP0ԊOqPwDEQ<˧1͆óoj,ݱfx*{xNMy~:~cMU\u|yxœzJ򽋖ZKW -(O nɛaFb#I+c!y m~K{n]n5e;&QT. ɶNlZݷI{"+Hw.(eCU}t7{ !^tW᫭2\0XxyQ:H:~jql`hbIGѲX)oM#8-"dy1YL=RpBA{QAJ~~oERu):)2W]z!9tCf85lS%)40Pްez8wIzHnX*u$މV[yRk4J8Pӂ w7g_4vJ+OA.c[Y'>¨IZK3ZퟤE&LOcj|boP.8G"=ґwϦ 0cSg4D&Z!-% 2ȇ\PܠgUg4gm/5=w 哌`1 {s>K[v7B6D,9C* t&H=( :ՙ[PʭFd݋! i[G_r-l$].fhX?m;LuV_Wi> ] hà0& {VaXzl.8ƥ,k<ȜHTLn>MiSQ D_-dޜr3]1oob= 6+| V6h+y%{f0_`lElLf\[ qp=0]`17 i!S"G{Ś ))邆I }sA|Q($tq$l|Cr1G*_T.Ŝ~-BoBTlӘHjL?~='~`٢'3pG|Cae3i=X L&Cpn -tE;~W,mT&ZS%w#T1T6xj|4I1Ŝhȇ6j/xkj賏 vRy@GJqd8 Z4`.8 *!p_VH`vpLVU?SxPR~9+(R:3m-;mޯfW쮉G-`@npybE]5,L % ɞTmٳtCIe]"[f5.ɍ|\S{5=R%eIXj~\X(M/&+ˏN"q|ZaK:2{iY >H<~UoJԈxAA}hb5 Fd2"li@4H)^F? d3}' 8 #wuvzƒoy&AL:9Z$֩7pG֙O3:ce(tpPg]!ZV?wp;h|:aldoѲMC/ȸeioE@~m_i6Z2-n9Ic몺sKٛw)[6?Iz(1 Ǒ†ׂO|hP$R&ظ {E>'hP՛\8!$F=; qD>8,o\{+F P)ƯhqS֖v8an,ŬX#/DMk7$0YL<96P[@V®4+ujovɀFKA,i>̈́\Õˋӏ_ Y)4Sޜ_^]^ySgH`>s ~BM,6e4wzyZs2!㑷~3hD]`[i⪶E;L p"<&G3x( ;aR*S$Uf>ƙa_dLM!8LMd9  C= Bn}n=HuDE zz_wVjO++g+qJ`v#лuMu! Na9.o;wۣ\dMϥWnl'+|ף*0?-[7O(lB*KC,` 8m(rn/4] I#?Ns{^⢜pF@W-iG K^\U^8F㫁9#u;8HP'1!oiL HD{7X싕>f8Wjo+JR!wDA/y`hj?" 
qj_ 1r+eV]6Ա&ZnoXYp9VM;\UsU^ 8#N%`Hl[hP׃cOH/wUܥW9PIMG#)a>H*N' R釟ftL\S**JHxm8fcmLl`Dٴ L(#4`$e{Ial-j7P#<|NIiI^B jyD4(f)h4#oSޞ ;NeүRHα"5q"V%n[N](Ggk͖}H&tzCКؐQx|ӌE}zrQV$s(w3NvH|C%r:,Ə^W뗑B4\;9d(s)^yM& u\4pPdRT4DvIPOi**pI!tp^k˱ڧJș{q1YKv +`ߋ* eÒ\V:wYwO 'א.xjOKĆXE3yne~~)k~>}]yUZ[k.,Za,, Va oS.eMp{jscnGc?G"FDž*$eD xuH>6F,-\5cV#\Gua!Ľ%%mtux PI(Fy0~>dR:p{:y$.e޷zknz f&*眈o qz1VUXxfJ%_$F,oi2E䛋Q.)$B^ll_ج,Q 獁SHF%cTWnR2 Nb "9E#"*ʹbbr*-= _.e!d,1@mch/>)g@ OȢL,D:JsB87G"YP!*,z47:]*$xg >*7r` j sWs<^~ ѸM#K iCoV3y[,*CMZ96Y1o)0X&c%rYI/%|Ck4 ǟf'Jz^k٦ukUGf wUʀ2`"Af*'%5aKo |n Ah?% a? {|cAg}R*k ;}>PC)o.ߋdrҤ_-O *bޞޝU 4l媊U=t P,a;6ꖯ_^6${cXtщz8@*#}\4Gᔮ/-ni+0ê7^ZKmL+3]DT)&ϊQ{}r6Zh1S P >]'V׎*hϤo8Wh3qPA a<%팣ڻ FoJMEicKUB*@:OZh)Ų{q*V!]oJ VPDՅj M.c7$b<]2mWR\EBΣ<4h-~|IVSƻK7ۯٌ(Hz߭4.~%)_,Zqne\m5)y{[/nd;ζ36UC/cNY#X !0vUiŔ [5} S|J?-&!ԫnF}zu\#N^ΑZwm};T)*4wmOܻ1XÕM +]7bN<-`UuгQuI1'v].W LQH@4gX I8wS &lC7U/Ʃ[ A jNGM&tK%_[7!K,655\-Ms_)LA6gUv=l8\Ɠe|ޫ\7;C!LVвݠӶ}h{1@q&s2H& zM~+\7qҐ Hoҩarnz1g58ZZ0A˔-zU(Z T-3-UGpV|PhѨ%#)镠8S)h &A"R|YG:Kfdb˒W;ZmnJ[d=)«x:ݿ wϺVf9+gL,3(rK@s Kj17Y< ʡTW,p:rG'3O!l(N52MQ(Tw0,IRBй+e;E-pV5tɯG+NeheNɂvM<(:R^^<#a\'ҏU!zȼ#ddL%.`bF[+zll( EmA{DR"C mm [nש `C50͕vw i`QFp9o,x\ન3N>{Ia(9b%I:l_C=ݴ'wP@Mx-e~s$w'XWoÔpF*k &U6}maOIrT rKm1ee$[)1hrH@+ơ/8w̑R^ LMKفln_5Dizۿ"v 6S="l BsvS-@E%F^3l`:UV(Xx[8#9I[[NKbQwDM>ȋʞedrAntvuQKélg̞#.Xs>, ;j'1m3$XfCD}RzF _x{W8wl7}֠u懲2gթ#J!lNTZA~w"1<K䞀NбZ, gU:"0%Ǖ/Xrb>dvs ֌2h3.9HVy(kHcZ̤J9sBblToç3go?=lWLJ14KYVy%.Hmp \>D47c>R2|8EGPh1fǭg_9 jN8@s muF< "9_ߐ_ZB8#R~+? 'DJ UdgLPu98/d ~>:/8doc{o(HQn"T" IŔy'8z}\=w RWDEhT{C%g綽k/P$^i$ffks o  #< mVl@o&T_Z:otv.F;Cpd\rqb>u&fH^ Ȼ@ ߲. K$`%jqсlXT"OBEKRJM$V3N+a=Y)Ԓ5sbEjm^e%[J";9ROFÚ'DLjy_z1Nq6y9HIp3[ͮ]Oِ!1);^'h% u.;:pAwCWAH.Bp(âMf. 
)ceW5"$g~K&9`ƙ?7d%v 6`ΑyL,eNkoj4[U>>A"B5QUSl?YŞ^G/+QWc}qxKCU2N!+ic\iAX# 9UY>P^%$-@!TI0a̰z0ͷZR 9rъZxh\8Ja^N8\kr$Fo< 7 {jWNF-I><WS|K{5y&vw8r$L)g._D0k'G>\;9b3yXsl#q i?.G'З#wq!Z?6@e{Ft0"XFZMn omoszNC;[|1#裞=rʙC(<\G@D EGf ֜C k7F2īG(:*ܗE6ٟ.*&5_NCJ?LQ Թo'Y!:}'zD4>^^1IRt\VZ X7eX ):^Z}P]zϨxy"<^wd6gh|>biom2d㺣*<[Q[ReU9q^̒e(| YvӐu.h䆈(˅; $1v^L!G{暉2j=W(4ŸxZZȅFcRX"YXm4dW9FX mG{t D Z !0⣍*|4L:@R>m#;!F._V?e->VˇQtJpYz9=~Ho¿~;Dhlqzz97Dk"m?x%oő,y%H5{a%~^ɖG?rFn vV^@ACK:m[vğ:@L)]:)+uy\nct$p2L(([ fg}9HZ$ ד ]<91֌Frd?lTQ:ΠQz0j/eT7Ǭ1a|]ҵיʟ{@Z edö0ic"0%{! BҀ0RIs|15rs?ru{Һ8?.H}ojnN'9A`ϰӎ.eŗ._K'#kSKM;+/Y3MNto7f6nnNŻŪi 8b8uluS˛]G~z Ld/˒f60 M 2"Ԣ\oJtfgd0_H1S7 hG=yp=YC TO"%0} %8lh1aQu-i5LsúIk WKVkJ gȞh6]RyR=-Jb _6`bHx*U@xmJ@3[è&h{ #Y#`Rh[y3u*`Gw}봴0ib=CLfrbQ6P?5|` If):?K@Tlo`Z6҃Mnp"t, GmRn+AqtR0sqG}TВpHj"YsI {&mSUS  ߹]L/o{裒Z@ $eSO<7+ I!Ϧd5MՠW7X 'eI;#zJ{O۝ VcRPo uxd=.Lm/אKBˉgRzSnAi=4RgT-H)'N#:"s3.[Jez<1ghj̅W"%\,33x]xe^Yk#-9wHC[ڀ,g74]-N{wApP*!g!ͥ D'`*`xD7N_]\fV{~ʓz&X?p(~K/ [9#Í`tJX.9['N$P>ASNʪq6]H@}-~r/̟ ӚċH/@b& *O+{V4;YH9c/ Ud4DeEVT@+Ws\/гfV :]mD6X䒇nd$CnsUώ Z嫌 -}'G+l(fP0Fwac-2Sv%,4XLmpRذ&T^q kt2ؘfv V9'wп?>?͸ vPPf7n,Mw~.N;sc^O"t35L/*$VZ T'eZϰ"\ mQf+S.DnKMo3tW獴Q~yv xM;w'0PWqݣKjī n ?mGFnU[<>;%!wgk5L+eN!+`² [X?Ϳ17k ;d^cU/li/n%t-Z,qgojĴ3uMq}?4J'.f(;swM]Z)M (R"\URjX̧ K0MCV [`1fzo%qptqIJxcXla2v}­Twn6hN5J_ͭ5q}z ]wF/j"G5i4%!*z @o¨#uFN 5 ݬ͑E}#D.fzH*Z~łhp:Ei+LyY1ٔkT!m:- hva4p.0C˵^0)Kz_nd ]ӈ2(QIڰf 2sJfH". 
YWQa[[ftDW?̱XmLAi0:O 7)<8r-@*6/}V᾵ej+#)Q{])4Uggw~]~%]d~$b]LL[~62hY`ga JEa`W"8F5ߐӺLu3>/HDXB#mzBoo4,׳aYEj2*ofq\@ %q"C%&%c,R2 8y-ׁө;N'˿@+8˝H ǎ{vbʽ::db154Y͡<[ ϐXN=!O=1lU@P+,@1T_#F U|1eVdʡb\ As9 s--dP# U|A4sF-ϻbU(D Ѱ c9_` X_zr9_-8d  **7AX]}0읮*_hl:phNR"ba>(ػz-(>2.TE]{Q_D'Z6sǐrz,$bo=rkH{xNE-wa߲g͔moRѷ[5#ёX~{*BZk<ikv9sIty[`kL6!ΐً4mH鉬@qF3ܐ.d/XT9U0rjڹ 5/U_"*ZML2!\5$$dc 2Wܹ//a?!'[CF=+ŧ&0J+o\FKA!L l,ikyntDXP$dp<+;[Օm3MBɀԑda?Z[Kk6q "kDD}r\jj&j8Ƶnk9@GETn~o4?u {Co2T泒>o$ kWT=sow>XeF~P"VKsR鶘pi*qJyzO-8Aᘧt{f :[*{0V[LgepW̳7Z U`%SUO ~z Ӌ8TIZKJe}Oajׇml6Y^|w_cnhjk| @[kTJ$U*1$H n{-I'`OǓNŌv`BtIXK!FO$0<#~?a|Ch']'s&=-EDcpf|}{e$=<%)-,u NH) ` nByHiVQu|z*./2soMq7<"W~⎃r)K0N@ӿg cw!|(W0'Sre4ZLgY=I&'yPiq2ZTA ,5/.\6$ؙVGv+-) ,kyo !=bl9{>y黶`ivR_Y1ǺdXud3*^-'-p}n 0Ɓ.k(ݢy١%9Ҧ~]]xuR1> fJ%c}u^4qM_ J,joJEg*tǻ`*oErN. Ht<:w- + ⰳ+>1#iv8"_x '<ȩxyTWK"?כ)q™ @FU+ K{qvF:O޺Yn(l|hS\d;JI[ zswzIĖ*H.|f&hrpcG9=~=g桧F>agxhJ~i-縙ET.qüZ|6`ϴ0ʘ0OV$Vc.UָLʪCuFVj!Cܽ_QVjQw9 Xna |}hqP9,^J:$ `sjy~\Z2EG'A!CKxhſۤ|V2fsBCԳCR Qh7jv+%]>^k,rnk IgM+(Y$R6pk!zELؘ҂W=8/-\n.(TC-'Vnchhx}WSH $;f"`[I#h_}Oٌ۳nE|v;?|摺+0ԾӇ̧Fšfs]&OrE6 z c.51JռhoG iOXHoHYvr2JU3k뫹=qNaT`x o@I,}/`gNR[ޭqq#a2|1 O4!{+F-#ʯ'N؛6jd)d gu@ūn^ki[g"j&礧yYW6Ņ<R$t8 xQOAP x۶m۶m۶m۶m۶ssj滋=SܤRսIp(ROhoq#q@=^lB2)$t<*m*Շ|XBvJߑ(}%GԔo*n, r,ށNq;?e/U>/D > 0hu9#ls9³TM / *.깼~6 8TN^X{MbߜĚIZM1B)ytX\?RӸ|t1xA_ Prؔ>ɔ0@M}H7 Җ}C7`7z- m>5=FbmL O= o8p0`2p,a1l t~ Qqvoejd qtJ굵+>۔3j5006b0D%):_Y8U}1;#M$}x'ɟ1J$alA܉ c09un#ޗ1ϵyVvn}basU􏝗 8$8!1a.,[ b)"^ 0Xw!\n2G}*m=+ A|d,?t*LXiOأg ⴾa ڏ)w^ o0^\LhdcvKsA&7Ej@_w>a}Y)?}|C{>`]o7}bZ3l<typ.0vչ(g22 ]دH5vwva|8s1#;7 v\N^֩7ntsPl~2=@avX}͊ozInw=t)!<+<,xF]FGӴ$#vR1Q/y6ޥ !\}o#Pݜ,.6ژyb/AO:k: 2zL F^@>ȿiV`ZNZ=X4ytƪ:?Fd24o :--)5ct%(A,ygu} n^_IK۱NKPUB*$u=ڃzw3O GQbg`ڤ)5;|FAžZSF+>+'[p'oY pT'gW^qNURj2|[e;'+3i&UZ.[N-kB_DęL`BRc 6X@ݿ4H(7ބG j+L97y1;i6PU a,d[P5A2\qi*-i/Ÿeز,b Q7qm(d(9GARP $',2ϼ~ tF=EyEW s-\o%tFMSP{4*p\{lj/\r"~&Frug;QU^WzPL78NƸ\M#8k(jc$gr6ޜ%DY)7xʎӒQҷU Pf%0 O)Fn5_Mw1;=/$xWx#w"t:~ :J̺cEF4e|tAu5M78ok#ۗ-#YSn4\//$e*#'>!S,l}D]k:D.z\uqYpڢy 1z  J9jӋۛu\Z 
iAs`Jf+ywWQVM=0.l~gK»fTxԳG?ŢyI7hUy/dtdF\DU,A7,:v2oM@2Nf^sBF :]$3R%7m[9j~HMtwlm E>e 8؋vCQI'v}r*8A….D W Mlzml՜Em&$No^Mد_YΙ ì.!QFn ]T _@k6k*|ޏ.bXd6 0?Ip+8W-F $ sƮM&$H[Jƪs hR"ni} Hɞh2 2$nn{*mV{jFU|<|dD<'p'5o 2z@'>Ld`rH*jFq%^"MBEoq/S1NaL%ܹ]QmDgT($5ݝ?$ Nϑj{Rܷsx4%#Sf8ȿS V&wK+RNAg4ٳQ`8x>w('qS$g &}fwn~䴺UryѴ) [j1v LVx.Ys"tTfųR'm"KW2URuQժEݛBfM_WUJJ5XUV%0GчІܨ#snWGcN{~3[NCT~{c&Q6ҥt.Z+̒cT X)3='˯ȳ3࿎Qe gYx-HFf2ga6J卑*A1q\iK-'Oӫ] cxR26Z)מB^uWG\j&VT',[^P O`2Lj*FZm*Ĥ6Wx]s3*1dVepȂa-q,FPgY sj)O_Q?nYϭӽㅀ)ˉUnᬳGAInb\ ]tp@Lݞ N[.ԻgAefCWt?S'd0?P,#2Iٵn|,~ bNO'(9S/[%U/?87R@b;סl,mq@+V(f&3Y*CVwG9|;$@O$}{>3h  }u-xm`)j&U~~O1~J{{ΐ\*X')"& vL۹|HJ\m_)<`'Db$U9khCp1yFRۍ#ND' <(|(Ի+ xy2?QҔGqP?ΏHهS\߹[~4q()00b$^6>l:D+@HsZsEX*h5dP8 FyUt\~8ADUpLrrQJy) Fegk3"<3|`Jmc|Y ,-+{\@R֑B }>\+ZN%Uxs SCLwو1L;t˨l$K4A2uS6f.WSXE踣&CWy SyċI]/;O-כ&i˫1x&tEp̃I,1&ʪʤJSB?tJIq塠?%/8A?S(OtފKjfm!|yzSL iEUE9}]#fyueHD֊\՜ ,P'C23(2iǐގW;<=>7's!pT_v3cftE5?dKXA`+$Kg) 13,/E+:둘&E\y =YPƸ7pK bK¡QQJmI"bǵmkĠN[)>mIĊ_ jRT*?R  hgX GB TN-XZM ZO Z"ez|ʜɅۤ H}|}~HهPwoUqwTwEW#kJ< )L9h+ӄZeU'tƿ n m} ܜZuQPўğs:\. 
f17\x,@o7/jIA \nBCnCJe+.nWP)\X߮n 1'Ie/6?ݪp{ BڷzδY cZ<%2r9x<;4 Ja=.~=^Ls1Lg'%@g9Yν]ܥ ;nN @ɨ[{#e4iXD♉d&Zd&\B:!%bpCٍEE&p8@_ڷX71mPܟ,%) ~cSB2 D[Zu-{ۉl)kNox2&fS‰uuW7Z\h n!tEk1HK76#C#ؑ9Nˡڿ'{vDBܫf{wDZ|\s7EBMGՑuTtZJ\H}8Zr5 ^*AZI,,>\>se]>;K4-;H#M=4N 1E.Q0CY.ms(ƵR O;qm,%"ֶ55wt&$G?rTkFUUvq9 Z%>cؒHDZ|<(SZ4A{猘v"?e0l.c3Lރ'{V9Y%oBy[_niDMs?рo!-|4Qb )J20bY$!ls0vN˕J e=Qj8Ųì{WKTYz[0m[gtE@#86LL¢D/WQR(Lvh[ڹGel!su"F˸θ^6=b8AN֚?OfIbl|3#τ!f3f_İ+H98EGf؆O1?:u@ĐW bd~:|rO>8Sͪp'vY(/1cr#<%5}L~N*P,2$Y>,Ҫ Ox <# rXQwWuߍҴ-~]əG.d;cvOUEߛ;I0W=ⵁW3pϧcud1NTGZx7&˃ 9fRwdIU~B_9f?y0;"r7}@P\,8'Rxqm-j'' >_'#U*7I-3IC[y#@q EK稗9 7OgAb9L\zĽ6xpGJx,"*#LAqDY;1%{3>+ gzo7nm$XgCp-EJ;9&[_ E 3uu,za`>;;VkJn;r_0YKY5Y@*RXHsb;77=RJGmPlQvA 3i\qhA漖-0!e|:wlw%nGisJrZdoaxq}Hzԏ0O6JeJC; I@G{iXWإɥ>̪6va;qJ̉V:%QB<Q院 :Ўg5x hF%u<V@h ` lФ ~^4)m*$ \/"*~_He+ DzBJ?wFND xXpGE4S븹O&ߞKr6Ǵ2Cnv-TcS \]lb09[en?*F02D=v?ꎲɥ< OU%[pIgv5t5K" Ppi,<; "IcH߯}5\Ѹ^9Zrtzc/F<+0g:w\Y.r׆VH+/KT3z]iNxЇieMOYE ϯ˕-+nGzONB3^cea'3= }3eCMe#1T' NRl9WE\fpYdt/f gS`2" 20@OץCNFG?߰̑v9 5J ir5)eW,@KG|ws^`u# FAt#֧S͊4Iii#AnH4r͝LӬ,u B' JmDB~Y{PC \w^ tb T)gn<gi,7J.BpQTڥNb>(f~>lIʹ=.E76@Ag! (BQ7-MhU^p*J~*AدĆ4c\^ W˥A@*^iԲgjM6sXC2`(2")LlH~rX:(2s"ipoE_FK>3b;{!3y$c$hfZ>|durZm-/?#B_H-j%;Y1FTGu2дU#]I"k 56c-XMTW?1W.`8 o"$@أPFM*Z4Y|V ͐[+=P\.@F%}U9a|v #Vw A5=~Qj*s U-C2mdҤVV[T&ˆndOOcϪG/J7,!%)*!Ao6@Gj.Ll>Z ϼGq3*+=D:ڪ٠lJܜ`[rW4Awix1mn_{;cf.2j.nFBoOȭ87 kGE `S1R`d2=IfcPk!TlʀD$`L|Cc̀g׫GL`Y2lEr'vJ|HFIH HGj­֯LLּr.蔼_P9eRV#!ˢꟗN![A|ȉ c~p]B|zi_:PwRXshUeRn9yS8 I%w&u5@B+Q#ɱhqcY 4)ţ2Yjo7q`%K!/Y81 RKbF4CJ5vle@eY3c-%=[Cz$nȦ9GvmX!N#lLݸQD=4"Q6Ji~g=:Ĝv6B=?kV+s0AGsSUw5'J>k-<|-ଫp}v#SNRP땤uګIJAOjAA\L(HR9 S9RٸPVfu9_JZR{h&,~BRf(AI#!4rC˘"'2ؐOY[R3^?(zzjq(yӹ8V-QR%Q%|wcIdI2o\&AeCNz!Qї$$OIڕ+qCo 'XWӺmqMMOg%Cj0#Wyɦ4ܺy6 O} N_aL|0Go^/4Ht!%<mrnp>ūoGc7" G̸Nz0gbc͕P/c>ܟt*GAqTօ#@Rt4&)t,qrC*ә2C̦ZؐKw|b+ڑ!g ԑdh _:hb م5Q,Y)N¿Pt3%Efb+V!1( ,ˏGsR ܤwQJ8#w8a Y>;K ^#L=ܨ̊ڞ Cul_G-AQH7zjw"ѝSW[. 
:AV)lɕo c] *5T1䒫qIp|q&a_kJZKm)8U9Ԉ5t7_52Ii yr`4 lxr\ ["V/wPdb:7I:?!D’Ij(f1JcP N~'캆{ͼ%r=4 yTK7׷ykRjwy||a2>N$ "B6 oב?NS9,'ͿׂqT{1n^$9DǩΌiʾwuef|c{9O'O!q'r"'s/O cװ"y.ɹg*p<<5oK+YUP)cL/_ )F>D1M “Л}I.p*A܍CF|, P`J P Y|#7̶0早w  ΖdZb֬-B8֌S)q;&%9{moi; HHṕ$6b\`K"*{'˻!=KYi3ͮ4 hkK<2+̇*Gb8B'3O}3 0(=He|qc?<`U]rUcdBs&*5Ok9F߁:DƬ!$ta5O&HmB{j CbcBJW}Jr~ZaCX*hiM Aِ 6g = cX+qSˆpIi:pngv$@&T8H]Dz2 Qa?bd`YadobˠpL~bu#(p˳)Ic`wMs3u(k8c"m~8Ub}]>paϑ 9(r~ UQ˯\?eΊN>~40HO}TV!|GtAT ?b!(WK" ))P W_%#|AX',5'5~#C@ӊBT5&\(> |jrwX ªW4h8ʖԑOu cňRp[b5!rrUdnDhFa-&3d#x>aX^h**q,"( V *+(h虺ku%ðU氿NkKB3#^ls/i-Gz,9Hs803ɾD \vZ/qn/R$D '/͝*)~TҲ8ސ&* 6"wLs03+,?%nQ' USpu 6JHp9:WTe_ +.}*oi`6BcԲ h,cj k9{P΢:ɰjԕ>"z]Htb-`tה OwC%ay9{V<J'LmK^UZ?UK-H0P!|dr3= JٗK {q}}%^Jt 23,DH.d%*Wߕ7|_M9^~zjjZv?wPP4/'U<{׳-4MWߪ.άا52Ba?t<rnd؍.ap.nnm?@a =%OKa)CR_:dT x_. U(x Q%st&ǹC1s),h(fSD-( wx+P|h>>thybs5`Y%53㞥{NC]'.ƪ3Ybbwϒa4}Rɒp.LJ /ퟣ<1CG35xm P55zX2 HDali`J$w3"rdҍƥ"T7`G2P :F4eSxE"e HۗO`#D2ރ[3?ڦ%Lg9 3 |G0^=7#|v Y'@["ǞDl!;«W3By{/yo$/\)%)ߎ5%|n~c|uT3A[![I1Cpi8=LJa{ߌcfV%9/'z׸dgּy홷%'#tF쬥9= [ ^ ƒ-~\EY$xv.u^8,\?bƦM9'C~/ Q@6GЯnH@6AUt2DPx*$5/oƸ5dV. 
5 rdi䈩R\l{G Y\ykb86Qj FM\>~q%ma"TrWt: wb p ټ%Hܵ+a~6 &{o-yCީ^en#k 3T=x'R|pBy`Q¹$,ϰ\߼Yܸ|GYNHR|3pI' ƺ/fLK駿FlZV N3\o]MN(Sv@|Wᗊg< iByԯ==kߎ-lO1Ry6$,Ydoz}^O%S/*ajRb륧0XKuZPsu!exhs_Lv"mG*};-WL#/ -mןBԏ jKohõuў%PMzm;T`dqr661qvhQӱbEykw5M|6@FOjnQ@6Q `]<~DNJ{;?ܲ舥M,PFu.mW,I%( %\ gHȋw"J_v6Ғ-Фm7ɕ$zM8.9q'8K=,2HoAes~( h… JzcӶ^xUULϬV't f18Xr+ ]dBZbZqrǫ6>bM|,Yu5P90,U|=glk|V乗'ua^4 0͌qEu7W0 f_M$Fkt@H"Msf8ӬVƸe/NeÒ>$Dh{m}=ff&ÈlxehSy8!/i_߿ȩpr*оJ9W D1B&#|ߵes}Z۲hgӵe/jƋuUge\8K8"VQZǰdi7 .Kz*gI|G/[-שT0w,'aZAIid+C/cy k4JI)4T\1ѽyˣV~ T W^u#@z0A+:ghs &=V޺~xp4N>jL=nKh!}R%وo-Al[Kǁt;5E$*{g5 r%ע;L& σ\ږgd o(Ut Gٚc9d{%$ēi~6pXl}ZkF3^KoVR.zp{ ȥY\O,RO%c~bY/edl(zh#9 {[-xw)g$ $!4.z_*Tgғ@ E 6{ӱ?Γ.n'`<Hǹ$+ɬGJw|&d`0œ{ Υ5I,.gζ.ӹ &%im" Y( ohbja^_ycDi$$0tmP5ܼy]DjDUheL14(L gJ$k֥U5q;rݰTڨEuqK.є%{V)<*t t2rR&&9=\ EyBNDvʗ=r2~b~b>FFg׍oS)/(@E,  %x-zK Uy(GwX_0w{}֫,壿 9rKvs$mސMb-,։2gFeJ+%AO90dʍ,(`W HA90a˒ #R#MT#Q/A+lh(aiR@O44q@0]soҞ)췭5B_᯻ҙиuZ;.@||?ϙ>719f5 Sr*r=X.ԁm2FpዌX3Q$99BR$A sF1bZ ptuAet7u+IDc̉,7>؟?00Z\$BB#GG*LPSeXg]!Q߉x1mgx8"u )b P3"9~6nB+;Hb6!ɇ=wҤOqNN}/Po0+%kIO~Ӣo}c.7!a|5\Wmw<䪡X/;nѶ.Gc৓S:!-r9I+G4V9i6Le8f Q:[i#iasw VM6ۨ gT{"6 w'@3:\r,}ejJ:Z9W8Ϗ1=\Z Bqv1 H@߆I=bDt>JS`pPX9ec^7@M]Rsg|rT/u`ȁ}|- 5q%X-҈Ci-]0H3"{ybAU!! c~vx8p1-%nڰUd!%ջ}_ \u{XEY&cάС2KULbBh8*@xoz3lIh3'Hq8}J:ζt")dGh,*q}W~ !8/kb$ԑf=E eb:5qƠ}_es067Z>e-儞`峐{ò^OO7 X)"ګhϋvu\LlʴNf~Mϓ#r5jANdTOθ4\+L.Ӵ2dT|"mm\l^x2S4{}Ry?212j\!x ;TBPֻ'+}*{n8ƥ]q ׌r~|6jxiAr%x+4ȕSOif?AP PQ#?²#U>#gFFSWS BCFZ5 eMw V)TZpS:Kq6_`olVayHi=|z:90 LQGu*Qװ e(ȵm͊A0~i|V"޲ho2+x#G¦G;iq+ 1©@ƂNQ@<s@5 }V 9nFlx:jCm.Ѧœϯ"E)NGUGolQDMtPM:QI-|e󕭜VPh_H#WHD! 
ȠKz\ o?4e2%XROz>N 6,Yr`H0;U'Ӧkv$ω`RNzŮ%4B_.R-}I3#47ϐ#kB֧(OpEMWq&?Sƣґ %LIoZsNaKaz=NhalsJ7Eۜ +U _?]^UrǪC!?j!M)速 e`]9vҀ0ENk#sV8Q Sj$>Ma/|㲫xKx"*Ub%f9\S^S†JMDNrenGgTQަrug4L2y& ^ҚOEpJDP#-ZJcsik١CaeM '$)5vH5oS#]C=ftf[ynElRk'B*+y;˾j6M}!H;OhWUQ Lp| +3c [Ec[D sX8v;:.ZKg\1e_$ e@_Y=TZ$h8}I@&ӌh٣:^man'f7К%PD * 9K96a%92 [KZƪO,7w3 C-suMOί,ԧÇFBBk޵>tM§%=Δaqt5y} pSy=oaf5ѰPvֈAr#*A.ϤOSo1xS4NbF5R2  y q= 8VkcS!;\Fj`8T؎{ פ)ŕ7T2j+h.l( by0kuE^q}`.u*N/j>c5$\M8m{>!wYօT/J5 /akl*GI/\ YeN7ۃkcǤc(ciD14tqDТ m O/#(B^xQj38t޻eoӚ~N]gaeKGquܹBc;6"L&ed<)g91b%c) ClX]vȧh<5F:X0ږtiCȧP HF=2b"Ϳ<4S䉉aֵ=7 ֍f;B ֈwI։2@8 Aq yxqFVvv^TϹ~AMD%.wJAP #C`N;J*_%w jI,ool"VÓ}X2T+%h W΋9,(ŹY^Ź)5/iӻYY{mgygugeAg'g~prXA$Y#I0ȜSb 4QNYa\\R$-[zJP܅`om2 %@x&>F}!R4H!N-{d\p1d<F~J呠`=p pb?`?. irb,BQ0(G԰q.5@dDF-@h__Hed?ve-@-fο]g-v_(^f_f8>^g_e߫(v-if>.,Ay7Q<<+guR6iǝ^QW2h>\C*uΞBy'SQ$~af*٣b?{¸(8xn(N|%1jiPq*gKv) !avI̎(p0gg蛀wUJ3KaT<htw\.PNgA$׵q`! y=N~c;!dӰd%Rw u:Moz)A=>ЃGN|4W1hN/" 9+*zKΞ="W%?@?u$5Q?pIee")TSJgA0h-x MjJwDDJ:]"RhsGCҕyhT1Qdz_[>JUzi6c1&eKdt` /:]\`dιDҽ!8 :I+Pz!0 c?YP\bG}:c|D[YuU&Bp u$e04P+Oq{ۂB3lKܯ3;O; *`uL󌝊sZTbA>?G 7E hu$gbsdW[]%fgyE]G1gժ;I_BP h=}ky!Գ* ! 
cN"FҷHӢσoSWTh$_*o/Ӭ{h5>K^8bі5Q#:vk=U'9Sۺ5r-$]_\IXqẂ֕߀aah܋:4}ԒpP1] 4悞/|WЗahKRk';A_ц % 'ϽRhpe "rl|MH ɔlW%<'w F9W;ejp8>=GS,53< yt&M4P 2P=^xJåa0OqDzX gˢ[.=mT!ml";׫mIr)No6{/K_ 9QɧL=Myk"Cp KI1qۉL탐 i3ό*@5M Sr)(6lX>-?߼Ƌ ,R@| iq6&bbCs oMɀW cA!܁θ~·<О/KGV@y| }t; =w&wS㐓4x˫@nmFh9?VfEcO \5/4Gǚi[ ~s(Y5 % ln>gΜ\xܼ_zkHЄcA (ՁL#|~ZkKu ʽ1d-K=[2ӏn}M\`k M7/;lUK,G ux/&[X ,cLK:X&kz-o`] S`oGL vbIž窤ڰ!Y* %=Z ݵtMTV fn^34`9R%d^"%+;06?c" 8,9$0Q6!*⴪Bk,l: l0TRj8Z퉘h:xUbS D{%Ta;B]r ȨqHB۶;5f.9ˆmH}kqo|HJqaΨ!fx.V@[Ħ`(]W<][j.$qu-RNl#Rl BQUнYXMP THZ压S(r.~W-hFK16{TgƵY'drȳUT#PDXNzI:|pɂ"Ec[YZ(ŞcԹ}Q)y>'3UΦk z𭏄PGXnTqmVӠñZ73ufp:e5خF-%K_V"v諜㵛A8;}<2OӺMss[ߋQ!IP]?LXam8|yGh)ٞijEdh )VWEH\'* QFS#VAM68VIQz IvQ.,Eywu:e̯`JK"nrӚ1.oz/dHĴ%2Xtŭo()UjMbNK o(sLƴP4.rݼJ2nӚ(ӡ(b\{iHQ'$#O6@$soppFc-w'Xc_TɛO{/()J;Wk%5rhH9]][f\Ux F푣S{SX)x|OO攍K0Q*Z0snENWu UqPR_B\8C!zDήVsOUQAњhB&z"{I+b O.(WbYg)Mǎq&եWw6U7lEleZFG`۶>맇KJEK <(e"aqwtS^h_JؽPk]^lu9kQ?w5ڈ iR=4$ٔ,KЕ 2a34LK/}Dxov~R D ĢrWoR*o3ʼn7y(r󥵲 'h XﰊGcKN i;0Q嘰8'q㤌_n^&ȩl17-ŕ6)xg-J`4g ׌Gc"b܋Vӱՙp[*gl:=. G8n_f~}9P\ Z e4'-s\It!YVW "w@={aX&/-hZ FmM nǢiC.{@EI1Cl˪zR-A8Cj;&d5FQ(K(j.7t`tMK[T"i&ģ2pӮ~7>6~ԏ{sƹv+ v<BW l i=rTEEg1q^+T({W* .;sjy͓Y'h?z?ͰQ݇EǴz^UDXոCϨv ohA<c5q l_70!oC:ҔrֿN̏gEAbYVSڂyg7F&x_n?fIӣކ&}~}{o򜆾#12e@R;OG$?!c ]Œ$+ldDt{Y۰Yߟ e/}Wa@LؠX#%Am1o-Te]1x["Ty 9Ԏ>!vi kV$ͤ60^A@!l˶|.(b6Y7^YC0UwjpYӊq ^ua:Pq5@%O@̳a|UpeԬt}p]UU(dK>vUF(;8#DyĮuy%3 ѹ!y=8Ak1jQqwqs98I JU U :6)RH>lF{3U$[}?.t*E-óRGG71s'Dɇ3g/4'K"!FQz.p('l&wF(6ӶJQiа_Nɘ!7zq|PͤNwzjNXB / | gs 1%@ΕÌ~.;6=a/RzM{?3"|+`КȎw_tny?BfrC،P[?xįV717wk '4 $S9qxQp=;$Iq~hGc"e 1&&B`TVlDāp49*Eqg3*D&PT`g\Dl<МaZ|2 +~nxD( T\7xd jU  bзwM:A.!6Uia'ZL_2m˅!(OabnF9:9[`RZ[,M2L`XQ**'(Ij_WT K(ˤfH_* %ZCo9HK_{|\ɭ\~w|j}̦!%23i=TTV%&KYאikm] z!uG Mwt GV# #dУD8 fK?.u##Ejԛ#Qo3GzEtg oLV`2ɬL$%  M;hľ,hZnjkF%jopkex8[\Ԓdi.TŲ\+g[_LFVOcs#zr8m6}'U*KEpO, &e2ֆ %hlع&HPH$Ӿ~o6V]6bE O+!pU   yōQ8Fc*ѼR!z5 dVoLn4[Y߂g?Q< F ˞;>,7N+ruqP Dd͘.dcEזxCSi~1sn1yj)͆FsDS\3$oFp?|lp(s_!kˎcQ`a]sݼ]ɱ]gَQtG]UM=̼B6N_6C)D?'F&x7Q:@J~:ljط A}p?!+گN&`\!)^G|uȢgWDz:Grfwmos)DWg~ ĺ>)|ODqR;8/8Us~ 4c:ջ Ah70(5C-kuz&wdAL.bF 
{ܕC)YO)?@1] ^5`L\2O%PSx)pyʔ7\%8 ~1H3dL~tC#| 42<"T< ,;"&iu 0)@2G'Π6T%m;;:#zzGW>L禹Vmj}Y2Dcl֯Pt+l ?BVL)#${\mY֊Lȯ*V1NZZe78z'MW넒t UK12P$t`:C4!?4j彃Z8Iӯ:EwiCk6>~u7 Y*G|zd(??XR݋^ kGJmԮԤa%ƈՐPJ[$#>d/% ӥ" ?Dp䶼Jl|ex&ʺo9k1z#s$dzJ@h~<\hOMEa׾]7Co5E tb$(3:N`;U3-v}tӖSq)Ĕ:E 'CRŘX.g|JC[Ua/S@аݝ 1#FgUv3tZ(7Ҡ]qv$6G qyX{@ujK5bZ%+ x7d#m%uUEsnqגsm Ya ф\ ~) 房v(;5ei7@pD"WmDC_`iO81U$2ZMRer詤۸+AC@x f ;#!["aR(pg.z=q }nְMǔ*K!S#Agw0}JGSgnmɯxa\t/ xPzAJm0Tz4|Ivi;2HP.N(>-K=Ȋ-cyh4u˴b1ߨ0-)sta-H;vHURL(-V琹s(Ќ'?] {t5X2i~-{4 j&Z FkiOˠ% Ha}fe} `R-$`r=J8w=o EfkT*Ξ6m#-[L▬c+ l ۶m۶m۶l۶m۶m۶=$ݛ̙NVJG S >>dTBe{{C_IsbIWGf7Da;41\ok 98Ք-za:駯bb3HCNOԪ@T]hXLDy!E/2mBL)JxLsMŠ&WɤLs_a}|GgC|V|Dx)DQh|! +u=$?n4)V޷mf4 ʯk}F -}g;lOcޗOODAV$p$@eSmOYf>3IUY،e3p8F%y&uT1_i LMsޤmЭp4"4|ti!]/} PPJq[xVF3Ϳȫ1]2L,>A3a+< aS6C߯\3~Ƿk\?c~J]=YK(>ߴ/Fo7{h/)-Wi`_YWXJShTܺU\Vx. My& sAR\'`=:yHww/:@@8.ymwdqL.bE -)(Wh-xK 8tWc#&ne14d'Ҟj;RxP퍂6| ,`1A,yA[]WQjok >4` .1 nwW'|X>$d 8#!?~9Kq&]a\GƶA NgjacFhiAP^CԱ@P#vr&O$}A龰>B5 &Ŭ8ӆ)恕`sMryѨoOgWlMfg|CC]it-G/2\C){4;I ՏNYJ%jokyu+KΊ=[Jw4SEjDBJT#onЅ&Fn!X6z6`6{5y=fDx.p-`d^tOY@օ(^/RUy< #!PO;Dgxj3tDڶ] eIpU)5ړHfRq ^@w=nmorc%YTDR?([dzCnHxֲHoӸ w .m~ʭ;PX(^}:ưi{zxjۼN޼;v24r3\k= l]-XyҚy'L-y><ϔ'Qۈh&y<đɫC=piZ&}n1Ww>6Lweȟi #sɨ(}\&_ddy)Aw#$V28mOvٗ ́>Bn z )=9fEOZmp)^ybNX̬ErJ+OJ)Fw 5[FgЛLLLt[9ֲHֺD܈:8p~Peen?|]}]C'8ZC45`lfM/tF^MPcԃ_܅jDlyWbLo-;v{/2RKH?A8R>r<.[FzڍCda6yܰ6AD:}4Vb,n/lg*"1BĪc|$3NSA^0g\ì|{RpP*7y%-M[ֶ4ihOȔ,3(\c3bJq8I"1"5{QnvsTv{+V+j*՗&eb<²J+a/ s(Dg)vc3!*G(_<9*$43kL2q2B7,OIr򇷯Lsgepf\#(|珛W}cN=% ֯%q5% NK23Oʰڏ|$7-q; Amއ FF? 
cAjz|d4rp1/GF~Nb#%l2q 9&0t&}"@E')y,*"$EjW$ƹ |u=mp"HDrmQۙ sN u1bBB66Ocb5G;re?g QA;c0A-UR@d~\ܫd4RuW{JeZ ~z\BizD۪xtu6v`9M"t0ŝiujz%b ;I{O{d=+cu\&(o~z5got%M!*:LP< f3L~WPNtvr4q6L!OyP/>}~J^k9ivE0THя-a2yEc =W">c}{W4[S'fSeSfg|PeQwO+[?w:j%c(*P:@*PsSrmq9Jesh2|p mGGQ?_<@n!1ńEJ2͊* 5/7F/z=gmJ$~{sÂ!#G0L:G]-w[Wgߵ ގ|X|Ʊ׷ԋ61saF>}x|Q8q^c`7Eωi؆5뷧ݏ[$>;4?,ɭr?jf}l~t:9gOG$xnkNqV~84sI ֭t Rd]v1?t!+hՔ 'Jw|xNR4VWO"Z߰/p(ky&B"ީM+}Q ) =6HJb~y0/uSHRqT|6]FdQʲqqUwW6Jsד9j_!jtųxT hWKI0چ%o­Wqu"٨8 !WU]T^7oM7S(7$Bīq_m]^"((*^oY/QҜ1MNu >֧ԇ6l&)WYż@UA5rJ3Ĥ[d=ar?]M{<8gCg^uL?J-b <#+q% _擷G!^lj[_} 6'DCiCJXU\hl| 1d8Msj՗2{+^g29st+y}=*iUgUw k^" ;edbbJY-Ā{WwE gBi0,ADjaZ%ԕa}vN?t0^mxp+/(iB̨ǠTW󰼏Uw],b.fF )wqD^e+\$s $@w(0Qx-RƦB2G2F% ŧNrb_M\ eE64P;?:Ҩ';ZPr T޴n,K%0CE$ s_A 2 Wxv,y]& jujɁI5n(lgj >o\\=mwŽQ47Mt-*Wqf&F#Xz@UMp[? 4TTV8v{9Uc|9rr6v?v]"5boO`GRX ZE|P'uo8=:IZuʥok\e/>T1bSCjr,=h)oNkjl"nVV_2CҕR O$}a\nMa}Rލ*Q1e`OEPO\\I;d*_[m gte,5AҸGkkFo!Qe'%-O z?F SP5vxKυw'U9< o邜Wݩ;Tv!9&F$a)EѤ*3aESQ$>M[ݳ!go8-MMIFb(raˍGf$H?B!"?83=҉&B٧JO!161侀N^/J@ |PQCN9 py3W<8lۋ#?l^\Np`D-T-@vF$} Gw$00N6.+}<=؜3pQ.3֣RcZUO^4'{xHX@ P!$K/lJ1?҃F!&A?_ROV<sCsz}c/ho_ƩR1% gcuHBC~aSdL˴zOtS%崶2a_;u9[f0b['ȨccP+̴0dLofcJY/ZlsȈ96=KDHRv(/ԑjÜ,3@?xg*2)| .ܯ)5 XQb,/2Y [&HGCy71倫*s~vq`r܃c{y Yo?3Gz8vQ񝘺xYvX2oIeyHDhdZn6~3ZϋC**Ʀ;;ZKi&6't[Apz2~w,y DjbāOZcl++r HtZ]kmýjv;3:S2F{s{]xwzx2X) 9~j2ŁVQ8Oi~qd4V6H7qaaDg<`X8 I:ᔙJEeh$U-|EA!>dp#'f78,%^\\`nIB s b6&8v`"Ț-8 u}(FjqȲs} *O?ۺq:2i'?1ȂwمU=@]dN+Rzd-"QG)@ĠЕl"bI]C` 1L)Ff%CM}T2BjTKIxn2:Uї)iP\J);r$I$HJ^Is jr(fsJKP,#O{yeK4][Zn&($g̔ndxZ@1$ĂٓNסwžGsiZϥԓOF_=6'I]xlCf簜} ƺC 5BZ Z/UM-jaQ~aZ.E.IC9Ie‡`Lݛx;nW'0"{ 0,U5EѦƬh  c'NJ褮AɠYL#O;3p!rC 㾰s5Xem"~M ydP`oܮed4**d^;XXXjr]?~qH&MER%HXs#Ia٦pYlY %iļs]E?KG{\{qnd89#K%Y8L (4Bc-<̵~ԨfТf`K حO۶\.b C%`D 9p`hN0ྐD{ՑX(GWљm/'2+8TE:sdXzlԗ趬"zdU Y3G_(ƳFx"5O؈`Yi"s)[GQ'ɨѪL^<~Ph҃iR0׬ >< c *B87@͆1Kd. Q;e!&1DbE7.jkT)1 6}"ZNpi9 uvws3{l 4Sxf SG Z+;@UE_&9T.!|d1(89,Ix^{AF@P >$B^? 
S6/$ozy?Hz-M՞Qĵt1p`$^`ךD({scau08f28p :A 7΂L^@P3gF Gy[9/3*N:41x ZqĐ-nsO^0qiu0^AD?!Co&"ojj锲>"} +_u%`̍Gs} [燼GnꌍPu(_ C?ot/'OʩD$!+.e&~ 4==;8~h>?DۂvdJqKm@)$Nbnx-JFˬ(b6cN*abjm;B!%Ơ]ٜ)s$dyw#G,<QUSqHQ_\a *+TD}}^@K #X@)1mM6#!̓  njs)J aꡧ'Df7uk57I-&!T9hjƉqܱ軧h *\民az ע!rs-uIY0y 2"hpOwi,ז;,19= C$*unLB5h2UGo}/VC|SMSdq!*X!\{AW%y<|(wzst/]lg=fr l}U=JOVczN65w"٦nQEcӤMZWN!t.xtxrKcZP_+6dt"i< pQS 0vTm ^! 3H@ƍZ s{UåީW&Q45zax764b5UT2*oC9Րn[ hJorH*c2 !yr)KXC%k e\n _~ML7mcyTܨg)\^KuU CӦ^ժġ`(G}k H+uBejAiqVq'Y0z2I^g'"_tsfaɎB]*fs#@~A! Zuo쨺e!<>M;i'V. "?"{hkBlok v3z=-\`ļ3Ș_tw"uHևGɁ$-# CprHyURfTmdY|t.w(r8!(*{TWCw]MNA.͖߼ !z̔K\ۇ2ܭXK"[gZO:Ѷem4SVjÍ:iq*ٽ6Ab14w@Tv:Ϳ)UIRqre鮽\[}/B`!@c~ :0}x9-2?N@arWmk,sZ-E&m.~}+?0\KAov 9:t\я^Js3=UW<}KPmVh&Ȳx#i=W3vaÛ*Bo)gbA[YU!94(S~P #DajrH$>KzZ *g JѰe' O̵NKfr:8- J6ZRl+Rq7[p2!Xԉ\._1UM~ȜJq+@K-?VhYh$Ljv2^2(*xy_ ZV|ʟwa1tqLW=]0zA9$Ěw՜ ǨS Ґ644aP}TtRΫV-RDΎ~]xXv)r1oqx_R/缧ufbn0*+ 85)/F^7v'@2 'K/e~'o=6ZLI{ B'"0A,E,̛bhcټ D0Y(j;-ߢV kbebXa=Xd!Ø[L@ $e`ElV6کqq :?i~1X[kZmf_hp{Z 67[+w+rMf:1&&%$r'Of{};=y&NM6&uLH}ad^b` E`D #`zdQtCg:Vz ]C{4uWEI~ caɂjSFYJ.Xcv!/+J/g'K'Z=XTZCIHl]%Kk˸7 õT6k_f˝ tklkBZ ffIc gfS@6N\ϸab$Ƣaoˆ4H$58[I2S(gPF6cD+Tѩ!tg,ñEImє>=i_kuH_"P`ӛ{\kAmp SumC=Ӟg\(wreid%viN"TΉI:-^ReX >fi|\Kq:zhkn]_>lDDq ]S [}Pb]"c8^fc[XsR)I &vV\ʵ^î 7|*>S6)zH#|Wjk9ā$O=6tњ,ɹ,Ru/N֜X-c9Z1s $ݟm}Ě'}@#An/?{Fo6tulDDO-yOxߣ_zf=QGtݒ27pchڠЖ8*#J崈>vh(hzliθdht^̆Xz$8ENі2=y*Ra4t<\bWht\&A, Ywf^kk}_HN*3i|bFvۍXi[gCGbDs*"ЉXW*Y-]2k\Inke})~9nnX=jrpf+Ra}n{J1PE6G;uK*R QԱ][ͻC=lk k 3=(?MOOkGi5q֧W?w*yeзƬczqRk]((Gl`j]mEw=xQzO`JKfʄq;7G?Ů{_2%{h6q?n'cJSSRQWpej7*U Mjy'd^-ұ8wg\8;аZs*KT#˽_͎-:45oXY`'QTQ6\Fp(ﺧLT!Xe հz9^ߢ)-Fk俿'gכ9T6Xbጛ+K՛m Z/~|;*s4-_YYǐh+c# x/^j?S!\iO\?\::"fye?5Nvn2Tp3!ukHq*VzϮ9)TPȱJF+Фt M 0Q`" #՗UWjYJ]ęԑ.~ML+H;&=;(],' _Gt j[4RnP|Fm'feCjypJO!Yc7'!ʐנ :ٴ2z}U:>[qG:vR-mUolYL<Ƽ73{;+wSJ\M֡kh)鎸 \L'wEmKx{%Z;H~8($cL>330܍ X`!Oλ@!*@X{ B0|2'W<%qxhNXHlB@-Kz6l@$ xU%V3;*YU$wL 6<)k x# X8:sXeTƠ#F~C#[sń?M 3!A&fkRTek cHDǼpY;Tt6"g?kgD"Ռޗ_Uc~DWXwB&[C|YTal|"w<^$r4Nۙ6jP`{zP>2m.7iVuC]0o*ŴyY--ay:0QWgI31!^O:um50zN~`ք(*g~m~^+cGĒggз)Ľ!$M 
#Rk2k!m+ngCX{yL9ahWfA` ȹqjG</=~%fLO1ػ'LwxR)90* 0),1. HUi*=e(W.;/_fB{Im%*xMPCׯv^Nˠ+A|S?hpx_1yp Rϟu_@Cr@S r# esv"01oܒ. #Y>:leMVc5#,cb4%Lfac`J0~y,yvr?feTV1aGY;UKM6޲5`v&`/CXRy+Ps?qcO˧'`mIrw&c 1YJ3J#(!-ר''C]'I+aYKdͰS .[Ŕc_-5sh غJ1[G&A|{ L3o4<1~[{4O_Uo6^c-K4?~}|+,Tک9c_閳߿]DR|ۢKC.rQ" -ϡ<[&dW)wH/Hz{۷XO0i Cys1u~Л$ANp^/lE˄p<;Mkշz{[BuE9Z^/A{ 4gȢ[A../ <߮_u'jؽ툠)y v̳/ }ps>!5JWn/LF{1.X3m7_qݘLWv/Iר.=V f͂]O3 𳝯+Cל0.σK+ 5p&́ hxn3 V*ux|w.wvPiP; I6I_HEOTg;j͛6gU3Z5bg::]{.z;WJw-10\,9S|-|/YZnY~[PFX_=1-,ucvDtlnVŠɖSvZ/J82h/0i9+j@A(Ǭbat~.`;}٣עOIu#<*Ʉhi6/[)b> c%M,دnoDʈu-:UT sM"Mݪ. GK(5Wӳ1aW^jbީ,buaH580KG3r%ӊevyٮ0KXPa̿^Uݲ^Vj6DqgYpZ="S#`&;d<ץa"`WBՠlek-AH܁g %)-Ho(@ohJSQtU!e-FeۋJہOrh[-ͼq⛳=v7YxS T{b I_\뎸9gWjB>9B0(8S^>9d?&aΧ 5hY)ؠ1tdqxP lmG E9ReU}j)plkޜn ߮&p/F`4 ]3A8@LӇJrx=< Qr56]/9/3qk2rGdY0 +zOzq?8;@1\ w |>Y.l:NK`aRӧ5OUF4E = ݶ]vд?-}k :NleS=Uc<;e)p瀛a jZ٪@>Z-?ػ=A 9Q@jQt]2=uV.%%c W,i1+ud:jlĸElN[Cawe@]ts>õ6 x2O,XO8-0XF?gx!&nsQZ%/_ݕ^qG\'GJ)ٴYT=R-& 4?4UB4’lp<i`[O^:`,6EPEcx(+23{Խ摅S\!cXİ`0{5IZ]^˞L63Cm6I&Y +X *d"ч<;vzF_sP5[ d[KeC >A)?yސ82+Bq2a,.㷐X(-ɪ^h c=41XY`=@Y':+zX8lH(˫ϏڂFvϸ`޺bZ ϐMTBԵSTMQX3>8M~% Ֆaٟ0^/jQEn$!v7ؙyURHYWg;~EqUb~'25Y+r- DNt P댵h]7 =ݒ)eb(fiʹBoGw Ř$̈ vKHo&3 п3TW쫸BYZ>7(xچR_Ib/5?htk tΌl2\w qs6Ա~FeLX"`#^ YM]C#Qj5Gdض.y=!"f!UG{%좖-m71.c 1\B,,~K֍k'_l6x+=nA Ze{?8j^)TlXLۼDaFQ0Èn4ضzqr~m3_<-3VϿp1ɓPah93'!<6Af]17ݦtc82l4]˪ Zdf? a8~΀P*n#s$5_v7aե:X ;JW  d,͕5\)MxfWa2Dzai^Idamb =BCh7I0XKh^1U@.혶\&0t8}ol)pȁêbhmM)&q[ؑW,"o~xc fOѹhT;m)f%\2;c,:`|O$3F7D~2%+S3R\t}J4tVc[ yė# 47x~=#cUuN9q@nUC _ ecM樒fhlPHutq\O!k!\t@3x w8QBցu7# vRAbǔ[|-V[g]Dd4ѮXg wj馬}[द; 褚a|P^V :訵{*a衶.Ap*ڠxY2tb_W#(ӹަM)i=[;Rseњ F+07c"ACrMat>w< TSAbGY=HdA\>[&J&yJ[ѥؔpFXpBYFI  `he{\6>M&c%"3H"bjJș*XͶ@jmR15fG6?v xTm[sW$C"[r6Z(;"YuJ\aHv |neC7WH{Xn2l؟ڱ6Q[mOv*xWq;.q"]ع-b*R\=7'ܲks9JV:&Pn^^x}~OdDH?M]R{n Kv %. cx{bc6hnI,%3!f_Aw9tgzԸXt:# a(gεі v9eMH[ͲF|I_ Te0o@24{+A![*Ml-a_EXo KiF۱OZ"l&LRSdrxpb^ÒK*f}15͓8#Qy+ PU /*X5ZC\FX{]X 2EWsڍU*>ĹDx2Nst™G2 %u @"}MCKygؒr4*M~:Y)D46Ļ8gy? 
SW?1sqB1ىZ JeH-9W6BP,, "Awۅj#q܅SAv׬tɍKi,q$h(2/'w{Sa*Z(!SՂLJv >C[ 8(Jn 8'e|FNf A=`_e[o+[n;i 4QJI< =wiУvGaժΈ:oA<1ջa~FG:.2DL#&QPUǺ3۶ C\H|O21. En<<87^a? qGl-srf^LDs!$.aL wcXZK@;t ZE 4JIݥƟAECo َw1bs:) 6T{Fxz%f#zmVsj$Eɮ[My⠝$$h9ݿ6h'eL|K x)V_WL|bu VڒlfhHiЁnQ+{($ Kt I4~ 5ȷI_1 #0N]8+rO\!"@z38&s#}MOi'n2ֲC߷VtiaR,jJ+XKvͭX']1 ek{s[w\DN20BQ '6vԾ#8% Wy\zqǸoMTb>uxc/vjPp-+fQ"B3J(HI5_P.Y!Ae3Ky6qRprog5tTjl89DREمn.ԕT{[m`oʫ6sXޑޅsמ)l#u_޵Y#l4Sc<F@wW-<ϟ<@ө[(Qk#2C,Y?P痝7nuZ7ֆ#kO{ޞBQ)DB_/ Յ`tyܹ(,->+:Gg5k(ɦϴ3a7z)ǔ/[yv jq 0TCNOK1; kC!vz"5~AGXUiԎvoSlH39# m[dOSB8\pwo/mm[qB?t3YMJ%~d{O@َh{^"!$ԊK]孯0&@p]TCd0~CVUulGY x 7 .2SbεK>u%pC$Ҷhe9'IA~ap>D́{^W9x3@1_t #jYu./-;.@x~O v#A#@߀?]Ǚwxw鎰4,2AGg XLwdZP=i%28xrW-1m;hʔ.U/=s He'O%FP/@J⣍Mg=aW0QwLϯhu)oF*_Ǥ xPs5,\̚lc,5!~Q2j #ţ?C]%1oKٌ5Y 椑mcʉTg#Z/H}s^~OKItM)K1ѸoiL t\jN+x}P6 Kr›|rO :8j  67شQ4k-[罩 0ʟ^WP}:f.սь gzH<5W"F}c;Iƽ{>%[SCHo&0 ihSr=| ?'V3ec$^` [b2`ݻ$8 9&Ƀtf M.!qư8;p]М{k_:.?͜IO?^Ay3򼝨OQp' ?L'Xɐ82hw9]!&6RGFD Q6~:GD?(>Ҳفug>|Hq=vaNFQ; 8xU0QLp^A/ӢqV+&eI( G>85'^w(: h+֗]ϭ9?2/r2>cx9u@ &`?C%O8AOZT|o<=|Cjr3/QPI|㒊^7q8\T8D$UHKN*| WYq~:ǺK(6g.Z'4;"*-5h>϶J&;a;&Qi0LtF΁QA,# ȴۖ~({%? Z$Dv vmzob"Q4z-H^l!Eck:]9FTvVbe1r+EseASqpQ vyR4Gxx땑FޡCktcv_5ʂ0]Ahhh>cR.cH5cFX[ѧWvխt6)y^h7hufv)*Tc)F_ /,<ҭ<-)?︀ea&! &Ks>`pȇUa1%Ӄof8Q+`\X1ƘQ|S~5L:`#I|M5n$L,<*ί|”5 uG@]BꌎXm1nHi&<ф8fF~`gEj`/99Όr h3BO1S&t^ 9WMu|4b~%C_G t+pzc-.vوZh bu?=*B]o̩ \REۯc. k_vKs5 mtV{ VoSASunՈƾiMˢ;^]Y՞DvjTkL3뺌 X\^^ _o =u澩zƯtG*jlu`3 zk|Hje*gKwg~Glm`4IO0)RQ֗+E,T  //0#Z8⨷5)&܎nPJ$졃Rѥ3r&5ʠt8p::N=?͛+Z7Yk?_/W.Տ_qs>c{-Ljpp$P!S SR!Y q l%մlSc. 
żƥ !ql%-CK[Ը֓/( ǢSb%.͢Sb2iyKsXJsf/ |5{,.{LԸDI}6 7Fsg/TyY.ո׫ټq3q"'Ӗa&ywz!LEUfWP"vGb9 l.((ozS#\iO_#) \|N߻r<Z;d T+C&vYL qBUI[*[+m+8mLE_@Qn~ilbHw'"zkLp eoG"7k}63Ei\H%jJ\ G?A2ws./9!O 'BP71TFnl;WFO WR_ "b t9@l* hfNzʋSJENEN2f4ؒ鏽<.YVdFOG+XcxJPI'VKo?a{  A)DCWyQuvX2g)d&RWoї- f:rzSmB)pS}pS^h:U qAY]|-{yy"-_j_Ѣ{tkP~T7]G‡.ͬ']\qBon:ݬ.nyr ޳YlwL+ey̖xZRN$-[gHx+g}WR35<;7oCoM[/|ի7+U^ðw ^F=$+Эv-/o0vЇȝ\ .* Vd'D^pQ/vNwQujVotO=\.9Iɲ3XXe_o 1{#Z 8ZN[6l{oح^>(df9ufҨhX_a#tMX:T >YsxJ6m[N@PW2CШ]P]#W֩-Cd>Ci0uwUJ~٦╗4Zhd* '9ӟ=l<>_ADg#@^`D`U\'ۼkJ2-%Y.IfI0)s^շ ՒާEtsݼCc9D[N7 [51ZpE5+.::2񲰗2&^&(zp_B>L|[zmڮ,`UEɦ4jMKepW؜E.BD+DH.( p1. ~.n{G潛WxM1\Vzt6vOkG^&t )eu*{մyenȷ"~8H$=1͕$%Гk!0OWDl6M7{Q.s) :L{q&x_u@t iFzԌH[z,ԇΧ b Ã01KDk{}lMLc^NՐ&קI8fWX||vhE}uHIME2m-k4eڔzZԗ N"3ɛ/LfgM/_s#>U3ׂ#wz͑G՞vtE)/Ssa(.]!<;3yxhӬx?aTbs}K*ٓꢰ Ry^s $fzxaZ!XlDܞ%YZ1VJE F*hNc7 5zCϢ FmxO-rҔgd$j+I~]օsx0+'#lTz Qo \oWc"DsJZ" %'dO)LSՆ)N;]XAZ*􋭝HJF V/mM׷i^WGnלyOmn@W_?D ,]m۶m6c۶m۶m۶w ZZ=("wĎv{km&AG>jLkGO^>g,d`} !cAi Ut#^-I6w$x$_9c4Λ!;9EoV:~%(y57mД2.$7NR<9N!2?+9ONe@)^B_;^^ !UA>ׁogs>gX}ʞLH9D5e )F_X@c+>!KI76>&!F.gA3\ī5w~S&-f!,Gc >9Nx| h^Tg__vdwgny|[o/ٸFM[!)g!6TKp^HIQyFo P\"=73"]0n?{ancS_DW} wH\` Yt= z=<ՀW?fHV<!K8Fͪ SFP!@^-laZ`B^F`ydO!;r,8j^yLCxrN"%8m=$[l}.9\ /۱hIH<0wLu`^1EɷCsA,yRvDI4ѫ@` /M_[ȧU|?v㦟qr 7'OZAd1@L L n5Ð ܒ #CA L3$6̊k5 GRzKUOPt#EBH] TCLLɉyaҡg'Nubʃ/+B @Kv mʭSeѥwfoޫD$_+C]ъ6И3YW\dȰ*W> ޴Y~B+b`R|}y0A.LN|Đs*Y1=)1h$UǴQ,W&c,B3e9qRÿܙ7x, uOFhSh k6eT醵]U)G ^9D,0)[Pmy6F?NT{ FYFzETTźl4R &Q{> EiUoqcҰqpxNFM:$Cpif2&dTmr|aQІ4FQB-\k2e  Lyl qOb3di[οL~ +߂HqjLp؇QJ^cA̻!ف"EwŔqd_k : ߫n>1rs7bVbQ:@|"J~, j jj ;bEvd;gte<| YFl,Ҩ۸n_W\LVC.Ű=4>_6*bv)_>O{;!e$ц OxĥIy`2 n YuE%,Ȏqf9[RbX.'3&!x}ʳF[F#JDuY2ߙ+Û  "w1ƥ2xP]H; L Ue$͸ 7ey 'G馤;&]w`4qƉY;2պIj l EJ]!pg}؅Vc. c{,n'Ƴğ0C0n~ M7jC\9 ?2L^fFB9d^EEDzP^0eN.SdwzPiYAS{o@Tb?WBh9L܊Bg{><ބ9X`eA+RELv$4f@{ 4S<Ё?KB1AD!: ,)HZDΓ)>Y$&;xrZʿ*F%N3#Λ.MFZB=gQ$݆C>1ZTm_ FSt {G?ً# f9&R-6;Mx;8i"ÖE,r1kSYbUGPSiݱiy|i*`XŏF҈y3l$s%Qj(!Ul{k;uCD[ j6B B-“! 
rb`x^_=um/Vzp}Cbm}/[?*b/vLmXnGH2QF+uErah˗ #T}ʮT.q u ?񥜑}~2.[&GlF%EِNQ) y̎rHZٵRAxꆠlT +ZZ0>Nf&v6=%z ipU plBDK*-\Kt\,zzF ڕ` (Ny?68ch^lFTQ=,%(^(@WVlJ0` N Uel]b\yug(VzᵤEC|̬ NLkb(_t}8{`Ց1SG+DCDgtw!߿~W6~UGev'P~#:wF;ǘ|=R ,t J#o'MA;%]R>ZG0YՇo>S%2Ug7 n PͰUKsdz?;%>]eQ툑 1+D7?:yCy? si H/_]5]s~>=G 1NOYwC.MvgT#WٸcrFY X\=}ݏpcKYx"Htfq9ZL$Pͣ~C{-!lSUUA 9A VK4:/æ' < qD4ߤ%oi s g?!z+!Դ0w nAwWE@r$J+>'~="5qn8j%%wPpVӀ_\r╯7f?)8q? J<'ht~::z}xk:ڠq6p!F'aO8^Sj2K ^i:} l>^S!VEb$p]Ih exxIȳ>T[Sn,b +Ax2Ra P@Y`i(A X>%(+1L#7kk5/$2f!P3xl3"ZЦh,ṷh@g'^pu 2J-va{K̪ Ka?OFвy G,bxɬK!v.lP'|t^4ПIa/Y- /AӮӜ}L?$i7{N-ﶒ(tqNY`x0f0` ?Ջ86&\앤b ]1O忊.=9lAPtazpE>?mfw[4mO'=L>F}a;+D} K* J$9$$ *kgv =RCc]v5r!U~0]X@B/ŎrN2B3Pi?|cҸ a 7+!Ԯf_#Ck? Th=s~)(XL'-/Nb?hJ{ջ4L}lEo -!]8&LyeA?%8kC*bt!Jb9xbpV(f.㔠n͈]-`ZDR/ߍr;AO2O,b9%{b\1Dd>'Ty3 ԛj=QSfCnd;}EolQbh| Sm %/ YN^?g'ec~#+\TEaaN 9|+v?]7L\[m;չQ7t.W 2yjc!8&7at)UQC\(Z!@&&T-TR- W7ccQI¢NKTmG߯j$VRc[oR9st 9]mOv~ͧ1"E1QI`8&`w}Cwe4AMo2oR&} !Q@ %NRAɸ8P+]][Mv2DxJk ouĿ⏢T,No+D1@÷[7+ 8 jKN ۾ Q`aCtu'⏾&=XDpw0!`z.6!:S|`)Ōþuht}8-BW8mXe(ph+A.el8 vfMC^});׆`學i $yEaJw63T³)i+AN}+Qc3IO3˪cq@oR?<Ʊ<}Yg>i^r8-߶oVA[+/.Gb,k7v sG,lT,`+6,TKzh/27 c( R'kt5Y_H [P{bj^vh01((N./>Sfnmimie؅AHY۞ۼzI15xY|U߀b2LY_^`hy4,&2ϥ$ B\~Z#Y*]pMuNE +GP %yKCڦ+唲Cª);DqHV;Mö_ F#M?Ҥl^:EppQ'5CX*( @F#QҞ_;EhAI14*/ u1Z q1߫==!l#vԞY20yL`@fWH:m},x!7 }ʐ_T( \9ԃq:_&'2u.LS9+$n"{X z*#-*QU+@W9!Nߔs EdzTB{LvmS422# Z~ uaI9g|6 xJ0I[ +r8VD22}ab/ CQְX3i~>7_aH$U1eR_R"U?j 'Mpԓky6 }E=`+|5lzk~"JĚT6qxAL4Ǚ r6T9xBl3$BA(B h8<dz ZX_NTPs=K ksʖ[.n WQ3W鴵,enځ%yϻ*֘A[ 0 @hi[hj4trO%Egdg6s[_cpg܉tmT^Ϯ$@2JP#CfB ܏HܷVma:)}qfa~.A~q*CbZbR+5-Tt[vf%P>-/\BD)T&T.d##Cj4a4%"N\h*%ن.tlOYh ~#]J;we:dQK/B |]Jj];|:LG^-^e?0EsqK2 "9m= mtID#Q״@wf4o<=? B+5yCƁ8'F." 6J Ҥ@=]`:@s(t Pi=fω]ȅ0(pm?y r4!RN Fl4 OYYI-sD!R*Gh羽I#q-8x{ժzA7ۺm[3:m 1D,F0)Spe")돕MBxतF>s7g{ p}(52 P4,dXv8!ǶQIuϔ_ ;b4;!|MW.u7o-\ .Q]e( 9؋#Q@0E>Oc,S_6 J"wVR8Ȫw!SeArKHV#nnK nvO?nƠ/0)}i ՘^F5z4~yЖ7֒`pg> -Նem:nj[p8H ͤm)70^/\Zl,,բit1W1|(,L '3I#H_Ü)xp396M'6qXs)=$ܕa,_=!8:&( cjƑ{Q" ! 
c]_!1$dqɐ+-upQ."WjQR#8!`(@B~ )b@O hzq,S䐭gD5,d?/v*4?Jq\kIQjG6cdcϕjC'^?*Vrvȣ %EZ:3F%*#IWY.a=MN9n3[ӅB~AoƜ!BW_R{ӎ=0!((HMmxTXyɓ92#h>񹯟i t:k AR|0C"uѳax:P8|t=l΁]{KG0>RLQ*ֿRY.s·Vy1(DMƑ60(k6j%h% C1g< FeƝC4޶x^Ja81}šŢv:}МRLJTOd~pn@{C{}IqvTX&;S(PD ӡC4|Q% >#Q#e%U[ĬX3rl N1Jbs1(iL1!5aI-:DGt n}t^3q%5dMӭ-iI wGS+x.;=XY"93ٰX=Tk)JH Low"џU pQ*ZF%S:gLEARF9b9LcR~Eh3MsޟM5XSWU" 3[X&̀9޺禿VQk]Lf@QpXvتpk㾑DD;]\F0L^EZYAx*L)A#!aN[R-?~MEA?Ȫ*{{HNLj:Mn㝜 @JtS,a $;F' X*[Xy{W4-DqI~]| pĜ(oa?P)|]^9#ԀHz Wt``0Jx^ܡtgغ],^F_ @vU-_JXゥbGLACHIfW 2'dua3bd&#2I`V!,dL \, C[%Sa."E8=r"R?0D~ú0S-Cyj;֕Nq::or;a'RV&|j:M:.if% /Aj9)3my-eMv܋v{ @dV%KD1ci"RneB= pFkCV ln_OiHEӓig8_'y&3j[sy܌rp + ē )pfnnFNVYvLNYZol5BӯEGV%uzMO2sp'.oPNO#LEuOGfdaFțd"*|0bt?t,RNJ<=1qq:IJ@"bc۶m۶m۶m۶m߱m[^11sg3wӋ^wwefgVeʺ8Wue. BJ蔓iujhi^:9K}=UJ^pq8;vֆ~\ߠz~Q%.BcCvTiGWuy5?<^nckIOq0LDQ;l &i#m!2xmXz#vW-DdC=$lfI)Ic 9$u*w;X(A?k.YBE-q)I$M$_K=F*k1,碭(P"F-&=NjzPb%\+SMm&E/PzMuDU~}*hj5֓o]I([tjZls$ z†MkPK#LG^p$T T oI[2R= U>5XI' k Q|bզ.\/ib8!GcCQڲCMk?m` nN7/%w)<1li$ /葛8\^RDohVVYV"7u6 !rZ;j+AqҦ&~+sI3Ouz+zj%/n/߬io*vAk9Na&1Mi:qJcx$AvlÄ8~p[s.'.Վ׃omű2y҄#}ME9f @mw9=nȘurlBHlb{(}5S\\k65 ]>G<(sTdz,4$5Ts(OBٴw\1s?O#M ,#e{ƒkէ81Y{1_RR)j'2YO*- XD{)ODCF6֧YLAZ~B2)p ETx[v8hSĒJe*ssa:|Xlcp:1' OyF7JQntʊr݅0xevP1I5MuX+p8[qP=IX?Qu:MrC|?4K1S;TDڭ3 C#cu*OzG@xip]b^=>&wް+~,Ĭ5B*,"v=8vB+6˒ֹ0>rbs~~Q:ODf/ ~n AaF`xԴJ8qr8ŐJa$d{I$J}qIS>|L\\$XԢ=t,#]$)ojEWgfr2t.=aN>򚼕;5‘f-饀jj@3(-jn+|쫙d!nBc7w? 
7MX!S{vC"DnRfrfQLйsA)2&7\ '8#oZ PNMfYQN=xx8-ߍh4Ng ;.7ڙԺg_}qg`\T5+U(qcxzc=oR9/4zQb0~V3qx( É36IыN 3.~ ݂ KcZU =+L ùPwtIwho1InCS<[m[ 6ۘ85Iz1jS#;'.r|v[;DA˄v1lú[7 j|Ȏk1̦P~g1&MLih뒃nYW8g]k]y%WH3uǗ 7MdXi6iZ@R{Q@F ܚS˧mk'J'0[6^*&7msUp J;l^;B:RW7)eq)pֆ/xòq!Y`P7|Az@A7[ pB =CK@B .pwGoxe`I.\ُ3L{PTk@}xpC ji!7fJľ, NZ8>MYq'$#:BмBXݭ.\M*ԲamEj"ъ#l(GC3(+> 6T & CeQCq:z0) VLc3jPax -J}TJ!s}ߵi*B'Y {"=AH̰Jב0F"m1ˈ`y ҬSqs]扂lr۶%X+іS2Z!qBDfCZb`NǤPrwmk2"8恞6Qjsg W]~aqP$Sw !\Ӑ2{;WcAWخeMD5#t~gݓmSiqk HI.NçЊ~I=| 2yҸŨ%rO!Nr4Mdo, D^k֑[HҰEI h۸Ev\ɄK'PnJ˜vL- ;{Nj'amSCDuD1rk;0۲ I~ԶnR]DyK s XpeCb2Vf+3ɉ[q px}ZJ{I)3nٻ)Hb舨f_7G-q(uQK;me%;-%IAf%.|/%[etdQ' :t"W/1UpUIXqVHh R.BxdλlhkoEPRTϨf];"5OZ-k&`-Wbp[2b/] 6yS~i:+Om镣AStgtZ( _ |6vMY7.}L ɐ%7X`K6kqr.1{1j iXt5I4 8zMvN=6Zÿ2?٥ex8`aQե7xc$~l:).7 fp q=hpo6 |a#yu OO?cU2eq|&]6n3[7dҏ('%=p?,mBMuoW6f-L헦X ~M`2` po,9WÞޠ^'1Ǒm{X ە$2I,ڢ3Ƕֽ|x>=>M.9Z}$CO'+ S!u^0n`͜Ie$vfYQz!1<-#@vډc%QܑEM.PiTˮL* M|o*Tv3k褋|mְ94@_$S5y͉57&*NlT38Jś.{Ld\ ,z¸:4mA" 1f1^iaYE۩}I@L"h0kɃҔ_s Wo_}''Vk^#jt3zpSܧlj' D;!sIBmyVj6 μ4*+aE"n.b@;W-Y[)X]>+hx^,0)2xE-D,Pr?'Vz&݋Keъ?JlsA/<+i‰t'ވ J SVÅqEșaLs'mO$ #Ug Sg}O%/*VYqO3dI]=ɘ)ZimW!b~]2j Ö;@V^X^T9.I0.W=RԼ1* C1rC'|ؑow2bsFXݛV9%2tClzZeA;Uݴt/Aa Լ4ܜ .&Y*u 8/Ծ߆ \8eq2O%S yE|s۟_l72SP֕4[]_]AP-7Sb.LIi"][g1<ցL.n ~f75MCS=ͣ0:Z]OG+w* tֲL;u|-"i3 ,Y.G[ީv2gsOnxh형=q8"s ķ5y1;v~]`˳ͬI7=2u7S5+ mo},byt V)ʒp[3TKo՜^ٻ<ŻQ*u<ّ1#9JSx3Kޱg"S^$QZ 09'~Ҁ"@KYBZ(Go{*)+uoLCe[jyĪ! <ڙKwub\Չ~TS愺)VOM c_-.7\6{fUl{7t͈f}MLԤToagėVʮn`Yrm_p87opמ'~nHm|. 
koatމy-;X"&/V9[>3Zi+nre;mgLl&[gxC,}835B}ӚX<or$,4r#}cΩgۍYga0+MEd>6)9R_ubX/F+yż×v_V{(;/X{͝_ÅxOG d9C>s1ToDН~ayB?ngCق|a7i ZJ2r,U1 0 cFF 8h6f"|cVeC&8ߏ4¹k+ OɟUI1q[ X9ɘC !vqH=%憖t[mUq*Ikg;9'^u Y@1{_<AHܚd|ww8=gM}0xbsia2'< LΝ2=%#Bײ84\dLeuEW0{=Ъ jr%8]i:%peq³N~->p Y uR꘥Lf0_:n_agTL:lO{ĆJBg[LAvvp4= Pn'SbޮHzi*E)~ݩReљDžvI0r;"U9hʀ0D~tx4bYX!gc-v{P)3Hä:O0 QH K(ԥ*SE92?r LEivQ"H 2ZtD qgCL15yf3EIb''ObawQ6MFwc-Pܲ\̨#FT03ےN<$jA0'r%|^(o/VPٹ23\-D2\ű O?uxBQ[Z{egJUb%%Q>v:(b}$|iģ_+ uվvO 0OSm=׫Vo(*V e4) vGUEDA1uԙq3M*ӚݷuQ60Զuj;fЇknt/R]YD[DF,t ^*EoJ.Ǫ-g~V2(!΄2Ԭ!UH?BBeq4hϭ0LN[d8 &DLf[.NUv"9;XN&Pqz]*33o[E…Kam`\4Po>Uo|$b]z=v *W`Ez3 dK9,ZzGheDX\-6ZL9¥.731Dxj>G .Jpg]}*)p/0FScIv&`kAU* 5 ~]Q0}gМ v~Q-UGQV߼e~)<ڢ=hHx} nOd6w_El}-I8Gi)?65=jbpg8bמXr!Dg_^kW\4;g,r@6yp7='vIs{)av(ir)("ע wǃJqXONN^ &pa{+vR{j{鶹3|ާi㾵b2Zxwz2' eTWxLCy"[=/ CXr 'vsY //St3ՂIsIKB6 ( ‚:[ʴ.'>JZwZӆw"ç˸lt`}\OaB[df:Teȵr=cvjg\OelC8\!E@dA^"%{;v3bvm>7yw(%[#,B B]dv)䕇ړ.MIH %B\.("&j2&b+RU먈2b)IDa R&j"@\ĖTA\VoGO].r =d%r H=oU.ާB4$)6\r'WQ +2BuΊGmP/5j7|Ԩ4U7IjRvex&_RV (/wn& O 7]90̓jJn<{"ãPm4oPs8xL YFe', IC{9ڬSQ%d/[<&wn՝^P4݁9E+(c44F/m(qNǙ.ma^'L ZiC߸24O}7Ȗ`G6#T>0|.7Y8ڄFu33qdj,fJdI Ze1äe4X#k \uGdrO֋3,{ZFʪTțjuPJ4}83]>~o{@܉ iMܘk\nFxq '>DN4/3!壓s.sw#G?UפlQC,eGn?L tm%$#pPcZMԧ'_1FgE b{^_<񱝚rG[ &S[2u#>.ML3R1}A77dU(9%yZAh.=qEkCNy*^鑙$?Yl]01E`~(B0IaQ МPZq,$m\R]Ҿ7 V (/&nJϕ5nt:"]h79nC.B}aðEUq7tረLR v;w}T\J"e"(x`Xx2;y w&U(ucRϜErJDU@4 '؝|l#ɡ3HW{Eq>pKHfEWŦ3D8=X/%rm1̭!S&3hSLqa8O=JKIs-%@@oV\HHXȻy;)\>XЕfGwR`tE"ϱ%'WoWOӦm$d0%{Sڥ7&ʂ!r>ѻCjN'A7\epnR > XB-hKb{YkQ,Z2l-s֎tzw)/PtOK=iʙWR-Ϊ00+㤲@y䢼LQB?=^;Xb&5S5M+DI'ܛ ?6BM[?(Tyyn$̰0GYƅ7$6P(bY_Wud"vrfJ.2TG:;ۮG猕:iTklqHIZ^g¿we3 z3j]5 ]bfƱfъYOܩ@7cLu:oO8Miu |8 l,l]QTb.v. Du+ @א4#+,L;Ƶ_<QGU13@,WO}Yn0QFx㞽U/v0K-|wi(*MqDU+$ ;}^Ӡ:/Uk^46I}3Y1N6wyͶeLBBX(f/]ȅ{]'\{MwPgN9ΥKbRnR$z\!_nKx!-~b9nם? _RVTN=k0.{Ô*>Esy^Ɂ'Vݙ,y9LM@ xLzY]3iEMLmYC00QxccoB0c > !*IRjBQe$JGJ;YͽX<-]"׻l~_-D [AqeN˜!C.dj`wPҩ298Es55\pn^c]ds:ą:OfV̴wGUǪ\LȪU3wmyqxE.}9{kRwƊ<^fy*SeTe{TOucy#1DW]F̊]˿Ն:kZu"Sp'rD1kQ x8Zeh&ZC6 -R>AtsB-vNF#'Z  F=8[Kg~/+:ŌSsѿu63B뫱!#0KZmKOQ! 
!A'[t&eаM>*=妡~ˬ!##I{($r2(rf+Ey$Ij~[)Ɣ- ~kθDQbqtDv׫'l<F \H#L(RD[T"\i=_tD Gcc`>IT߳;s}P7i*ݾ6X*,7b_a014éɭM 7Rt|Ne,qƃ2_8sy#ZX6Orokz4H%qS͉i1i4)l V涸7*VZxs㏒R%HIj .-f*(*B:`i5](/ 5#kjXOscnʖR#\ QR@#i3wf~36I gێ(}D:(gr+A qyGo3|YSga ZA'- .A!*j:!w!s:A`k (u[*3;@<`EKӖ?d\J"+yIoڂ֭ͽ.vכ-5jkwӲ$zv l] SX|.DP~X2W SYI!ppFHW `,\͢ٹSDdց.;$|W,xsrݬ:BTiL]NJ(#xPb◊g3){͂u$ReK Li)OBgq [,͵YjFlrB<|ͽ>oY7&EEqX vI7947Bo2 { WXI# ٣קoVO5\L$-< U1ISp/,{b8MDQpޗLzUOu /ò3d1tdf[wLßp q%ĻP2$ihk$鰳>Cp2<\B9$M>|U~h/.QGZLp""י*w|u(1{DxNtL"/:ap8"mwfMT:=UUӶ#LqmRFAKH(*YZWK i[wt P#R[`<<%o#*Nq}4Y [ʌ69W4;:50 FL_tYoT1bzS;zL^ #f=P`'sa3x8Zv:^ \&Uy1TW9u_t\G̫M\.&OIMrŔчI ỏn2(t'?paĔZ-1Y$E Q'*^(/>?)y ևIb--u!(-=_fPI.Wv Xij6r.a&i^[cCfO g@9ơGpC[XWxtDaބ~S(ll)^Jw4A;DB6߭,@xNW1Yv½SUݒJ>q$O%lBq\̓ b .28  ȖZWyN]d4!U a2Jzȭ>Җqv74`66{$;el+a6V3?ϸԦ=/[\$et剴Ww ~NS*5cSᄀQNf[YD6#=}zE}+v^k+&TY#H㿍QhO/s`⼅m2xj ffhEN/4m|aP=ϓ1&H(?'Q'䷋[W? >rL mH&)eof1"g@ە05ޠvĥ0Doyo >@=ׯ@ս{ߩ6{Z#J퉙n:Qğe) VM.Ϫk'o]_nuR(؝,A`3øp/(yPA'6i.e|iW^ O ,)n^;ϕ8f`*F/`|G,X(Ү۩2'fpk[Wm,-sh_C}ʹR.V Q+Ccp%& %ף|Ǎo(Qƿ3Eёs{mkx]8t-μ^o6HDmS.'2!Z>JgzȤvx&vS_<rGfՖR6/|v/ *jM)M:Zgݣ"܃2=S2:97O} /RQ B/ /ڥ-a.+r*N֢V.jv>`,уwU[jjlZuUW V3'ەi^pbr)d^e*5PjgegZ6oJ:`H'-e+|S DEak;~C{]x \epW1!Pq\i Keg=O{" QAOoV m#64}* 1\p#P%p9Ds.@X+I9s#4L[;O!:>FN띍mTKWpLHS(Tmw !JToFBD4O@$.=R\KM"kĹG'Tȶ4;ժPkZ!zpVp߸j2_?&5@XM9m=ӵr %CZCPFCp^DL؁eM U,l@Ɩ Ay[hSݙmc~3wq׋_Wo^sG;7/o5_On,@XY xbPN\`$]LgI{iZ>JE!%%&` *ju/9YM?5D).'lKN/V݈Z?*n$M;~ 1cobͤk@46*$& b7B^|nRD6SZVBЃ(@j|m4$ a4NLV3)4kԻ PBc$A>G;k' ( !ae P . ~L%F$j89'1]L7+ti &]%(l2v xdM,VIOқG\G0dqFq3Bմ-p`.w%h F4\؟K{{Ha]| ~,/"`ng'N𴑅N~7̑EicKF fbsn}N L}9LF3 Bsf]-7Oh&Ύ5X)aHK͚ws"K) _&{/.ѕdC5fI_M{Hmh $|^a3ߵ=|†RIGBNꐥgp,lNFv Uԅ7MTRB&V)kTqe/E>JMbDFq_ziwȜ58}7h6Ng+8`-{z^F~VYV$P- ߽3ϗZϣ=(![aB4~UMxVC e: .g)U"`Pk8s % yXr|#Eٴ:% I|s(1N4@7U#U2`Uck⬋ojg#:V%ʊ!5KV[c|zX TX~j"0py+w܈ pbh|,.iw \>i9KLeHy6!ki->y*ݏ(g9Wiwq! 
L$Ɋۿ|Xj TL#R ha!Rh@K2-NuSeBȎ^+܎udndՌXOQzmq<-'/' nu3pe NQӶw_Գ:Ozws.%*Z&w毛9/;\ZZ f[<: ͎ld]HZa|h%!V;ez*oHUm~R!lQd4Xȟ?Yrj 8Oߚ}2ǮĊ[ۇ'*,^UCpw_=X3}570MLXІYZKn\G}6xK-鏠Ko P/H69 8{~l` S;w AR(hF񨦶.(R թ](whJILyPWK2H @#tha8s!̟dRkN3M Zwto+^n֟;ߛƯГ1z I)'6MSJYhlu>- URW6F G,A2{adX5>ӫ?EkؽU*߆wE}1Kn'ؤHxO\iJ=|xn?bƌ':٭LMHFeQEF:sp +Ku@>t ٩fX&~LY<{tew``h#߲VnTʤ+:E3EkH^=Bk;ֺ'-T<:AXsr0:pg _$^ɪ*,c-9#rn8 )M\U͎eln@ ea;xM*`Fobs4i0vr%ao:'8מl/i[([ٵ 'j)м5b^a&!~!MENln@+9ᩀ4/]F%,5`R3)!.%jt˓s؈/Mr#Dv/㮅uA1$%j)C t¡BxÁ\y7 {y T3o6"P|0;Dw\q%SIE*T?r9&!zȅ-@(c<^@IBARr vqy"ͣL/bSH ,JxD 3Wj&u_}Lr'R¥ϓ9@^F |ޚl 1hk 5V?p?;aKc&S{ ~h@̧/ܠ2R(MP([O)v_dՙǗ1ᲧB nO9>3`& rݞp?Ğ jmhU|%DQt`3DDZ*=Mv#E8 $`Z?dmq-%PNDL+xk3:)Jtڛ4?cupֻ!?)UfRt^q΀S%G艹SWeH/%+\h%[~Z1}oR2FFyVJHhQ[ZRJ=ա] ȵ"8AsNx( DV {%k ÛY`@ HdK^ h(N,+\exRuGpkH 2ߡܚ^x"l{n@b[5iRV jVOK"(%l)z4!]OjN[[$\}@[ɳ\kBS$󄄘Q24MDUϟ kQXk9L%4z^c" !yM_7h`r'W' 0 t̄"r8R PV87b*|pbHXZP:zHp\0e܁k5Q'{G b}@Frb`w(XW@) *sK%ˤ ɠW!JKwDZ2Y~H3N"?A! f+o4PRb j b wz\=H *}"mU|GZ96dOۋylpNA4 `BuOARUy2Nd˜}&m..X-kmVMIGgs#SpLn2B 1?ҤLIKw6O(-BH|zm( mfN(2zX" wϱn{sl2%]uF@rBM|,aΠQ'5FynQ˴E& ʫ9sD/,4У'.u_:[y;DĮO^aSb#07a&/]p\i%\u̓u5^PV]Oo+ɂ OK#p}{E2<'MH n1%4 4D.l? +FQ1eZ^ Ҷ,DFz=ĹX!ߌ2Ν8G$!e$< g@lsibACd$tңP8%3ct1}:#bic{q: 8"Em-]{hxwϢc89W^1Ji$dV1^=EL qy7L2xg7Z<.YqPoozfgW+^^vjת]wtװ歖Ve=.S_s6a`r% ]b,>n?xx6}[XT7f]iW}܌&J=YfW"f;XB/UxA 4^<)J |;9m?P.@((ňMY4\(!|HCԻ,Qkw#HOrb_ip@ r1ETJ A dYS|{\gDgd_i20bMl1_9C/I u3##} Hc @1e"I$()$A<~CdR&q2#0_B@9 `m$X^3idϿQC1Y~|L@=TFhY ]Zx}T=Z~~\0 G?'?~*'O𗑏 m+3ۨfd :dɃ@E"%߀Qbf'B#Th[_S+]&gop$ꪵ>Lek+Ґ/lJt/cwxe0將 :'jPt,tHn%2en/v~}|]<$M@B t}y=ߟɟ{M= 染e "_":ip lv$a`0Fƒ0 X=^tJj~Rr  CY @ޢ<^6UzN/4{ *$+jS|/Zu _1}P!J'e"D۱b&m uiAń:1@AvH4xgzTejTChBSygɟDZxIpgrr'i& 2JUxGv , T&5]7F 0%4T4{G?~nZk MGbDC,6ijd4nm/puTEZ֚85jg杻ݖyoua'Bܛn۝9q0Of!Pki빊Tp-%H*7z۷Y7v$6SJAA#idFFeVm5j97ti@L\s颦 "IZ AU$A;0v)׃(+{% m7qckU:") BDWVK`labRt/O,(g*)d0P j5GG:PmjśJ^@HX"&VldY,Lk[kEO[jea|1ˁ')W/kn `ItM4j" l ĐI!3c2 womᬵډnXT{ e0:H*Vy|Q9rqM/l=AwX{}yW:0w$11GG_#%Z<.~;>|&gbk5"MxΙ3b'5*jG0`Z=Z2 ?ճd`zzG|HaR4%*1;n30vw#]%+3nDk53wtǽf>–Q;׻%P&ip 7Fد-K5'@Xm? 
وb[K̉}UMݥtr:-l+LM"r*1ㄩV{_hkBOdsm:ZժZsOb5C6^9C(u]$9xңMvqjl,iVp}.1%2/h}ŝ*ͻ&_j=oKHg1ڢo[-6_MH DDvӫ|B;صTr r6 `>h[Pոmc69پۑ\ 1 /%+?ρ8V9iyΫaĢSY M x<7"ZG!Ϟ&0&-,H98{Hh #d1y/fVrE.I6Y2,jqu5z+W׬ksPC+lv@/9z%;%(M8.(l\ZXZ:,A}K`SBlAY}Ysꢨged5G( b*]UdS'b >-V(BZU(Z]E1W(BMLfx;270?Sf-_[2{ *ms2U:VtIJȂXIc;&>M m8=ۢO-Po(FE.s;i>-v+5%@pf'n(mFyWྙxQ.y(u$Be0߇~jgsHN7gBpxL/^QuGSE9*6 +3Ew]mځiQk cqcL)͜ޖJ Uh S[ΚS1eO?l<~h>䈏2JHCi.FN&"F$ iM A4!hZ- QcAV\m ^.E{3aHJxVp5_Ρ#~.Rg" 8,H6ޣ zg3gx;fh맨):QRϵmX]zdjR< SKk}ty>ކ~ջ0_*y]َ{{ΦH>"sy@sS^c^&zg;s*9(\7O<+ᠿbg#rA{Q'͢=AxGQ\3~94w8ɭw2Q_;p_k^?x*+Hm@sC?X>\$t<`>4 _sq򏧞𨟆ΧcdғL ;}- cG#q4Đ.y)@:ѐEu(|e˜sߔ+8 5S)7.$gnϊ ˚]+3>~NG7[P\n .Z~7y\>w מ:eo? =͒B\}-l|B:NLq5 r1[ve`7Dv5AQ,Uy+l[l+K(AdNZ2CPOt+! WGAޥ|(!D#+"bVdQMLlN 0$S~ayCD.h;\v\CލG?_E<#t)} HB?SNo9 h?,]3ҋ0F[oh&r;MyMBCQ{)A];!bJQ3 PY KnM>l >Tgy>K^0_"Mp$uY&ZQN5\pX 1w]1?6CfjD E y8ʽ*bb\- ]Gx nfWJCH};/"6` }뗀[" I`\<32D3 :r%31Ӟ(L`*h 8)~_P:׍j>yF qHsPi4^&VM";[Nd86ϥYEjp3kd EQ w|Us~Ϫt#Slagdq8 LĸP(HZ/܈);Pdk#" *%=2oE~gv̙5e>R{ ѕ똬gl6/xhl4DQ D6ioGȉ0'>`Fps-b@:]!Dc4tvqlD`aY|0x(\Az|8.6qSfIaI|˲N]پamc.v*:zd[lEU7&BƜ&N(M,mGffоCr/2 :r\plD}QܦgGkܦɿٗnܓ>g~MAJpJe1mRl(,(K Cx:;Bd'w$) !~&nmB㉵4L`t8e`yDnl0?c d.=h衆[+O?< J841>n R0ⴟtLy޴t!J8QHPv[h"n;^CxȔ\6''y69k0 UnF},i?I_8G&>HM~ m8;ﱤ~rST$c5" z0h6v5M:؁g,jd{ C ]BEH.AQ Uno(X]8_/Ų(r&rPb2nb 66lHCŝ.QO ,iEsrYbEXwP F+F?B;C/ZesGJ~NkH˕bgkr8 .l_F Mw:fC l:F$B*VwgM2-ţ"_O5v)0,/jM֕s+@ﴳEe]Ժ_U9W;uwcOΝ瀒ے-u\$j,E0gpE'{t! 
_e*p:Pq`ܾSmH\̷pӰ;|,Y#DZSJڴI.y:JcOz7b0t%k bo>!UtFz)Bm1ƣa0z!T*W v/3WH&2rq߾jmc̜+S2^<9›Dwbpaejej*QFw_f`VTp^1.g+5{L*5}i&-)D`Pl$ Ap%fT'-+Pt/{u@^HAdk8QOi#FiFv"L]uܙfTy%t'Oo S6TOޏah*]^ -s/q8 Vv:JXr#8YߤIdiÓ''7;&^|&U)e)[/UoϋIE$]JeP)>\{> ^`4W;O2jC'~434(̆]V܍̴>Bm7Bb uSjo~6=;w==pM0;xx]nL w!0L h\$a̳ ZmOIJgc 'h3MiZDr$iy*7'zd)n~/AmѣH֟!fD0IPj_H4cq)'/ ]<q2Ӂئ$!gt>b]͇%]!ۻ}iȸ)/|)$AF6 ~!36e+ lUGEl#iɖIuyDf}u K{1IK=q+e;V#C34Aҋ 29F1Ĉ![Sl߳gAlŪwJ84Rve\x4f qSKt%Cp0̺)s9)9=2 ܷX}uf1 q (1}vHPz_SLEУMb/Q 5)+ߚ;y3n7ZM~ 11mN0ONQqMN4s[9ǪQhdܹp^D?(~$ nF8!%5+'NDgc g3ýa.y0Hs;p:2}tP}W^.t@RoBdqiG!_EmU͘ẓOtP9N%a B@jE (R0An#\+ɰx81?F좞AR^tl^k݅Z3 =LN;0Y6E< g"%c߮giNc2+C+ŏ :(Я1K)O mq)Q{ g-V\+Cm3b}C![jz 1=HtԐ d=k r: HkB$o8:VMO7l<ށeF7u ̉O_6& eg٧_>9p9Eqj=)ρ< 12H6,AW_[OZ]GR2drBU7JI+ ?Ec謋 fM _}hP2W% ح-VGB+ QXuk9Zm X^yWkgݼ >ճkϸ ZͶh Z݉6DӀnTH)(, 3յB؆mVʴVQ,˽ނw(FlF=ѢEyȶI,J~DIo) \pTͱ(i 9gdF>L*hQ$qn85]!=Һ [P{dnW[|&^/y,&AhP1̡IAl5ޯ(4.8Z.aǟ9~hS0* 010Jrl̉x9;EKM ̋R} 2?Q VC(m \5ow[n?(~(~t_QB+C`?3{`ΥBT)ᣳ3SY?jy(.‡3Qvvg~u؄n0 U@- CWH:߃._J7uMŝvWc 8n>(cg^;e>]oϹ5g`T ),q8pQ1get'&vBr ^MgicH@oWN.kҰ?@gTҰ!դq 2swk$0QLFTsw1J)l^G< 鏖.6Cյ͑vpHұfpz=C(]*]"u$RХcHr̓AN͐KI;iJ"rFp'MKS4&uIѾ4$鑔#$ G4hb0-d! 
̤วǤČI Om&' xPPN"ofec ''^2<82VYoܞ F’ͭ L+0T+`i<0C^ adP4垫j9##@ԧ';`䋵GggGWo?zC v?zxA ߷?v?BOz'L<;n<56ʋ̜X_u|6'sQZO #fߵ ~">,ICqH#$,k5$0J5+gM" ͚W$b#1B8O|JWoTR4(S}l3 Y -8'{ HNCƜ ]\c0Z|up{Ă Kop`L 8BUbxa_@UԜ#ҋlלZ-ԬNo;A,Jn&SEFJJAsmw8_]-g\qkR],on/?u_$d7u8yDK=o2 <O%BA#Ӱḳoem]} ceË8%kD\qGjƝ`IFulkgͶ/Tzfͪ5Xft*~QƋ#`š^[!kKj: ^dbWuՄZ| KD/0hmI/3/89T0JĘyQR"~]%GTVfյzc%lhFWڣu0E.݌/A2l$mΩR(Vג&?aW]}h0/!4[AAl9P!\MKn,},8 g:i4*[<#rؽIL pU }RO؜-E7K̘@&nwQAL5ĩmQWj0)jn83xs(ρ[}GpzځƝ8=Ty3Hnp2]\z,"_<J Fw>S1\P!5K,9Im.}ޞm* 6\!*w,@ 3Ae+ Ьöm۶m۶m۶m۶o>g5WrrISV(8m6iۉ׳[BT U,' |#r| g%ӌWikGGtIPRMn6K,zLnXEw5W<ѩ-[D9@u{\ZZZ+*B=Z6B!n ʞGރ/R?s0c.| B` 5HOC^p IaIUb-)^r;7fWK,*yUOw'Bb` r&ugθzj($ g_+} WD,#Nc"R:@iZQeU'24T gUtUQ(l8mxR (1 vw!PHdGA BLqޮM7iꔂ'LG|EZC œoGz?ճ{ 1ЧZbkTm*cM]kŠqb-M]rh[YHYT٨ y'I" C$|{7MDsjSEzE/ڝ]}F~voE˷ksW?w$wd~ЪPԎXV5u)C$mE,߬m yTDaYggMC5hj-HD*.ݣKЮ]mLp =/]Ikӡ y=B%e-9Gc껲nIO DrX̐  #}OvW(zO-+SZq`虨d o|z CzE/i1m$Һ3 *w27a w\d6;W{IRi58r$*rTTbk@h%] jdn4^qYlP̎}㰭ۻiȑVdJrY$2yt5BOF/t`WL5XwLS;hnV/ls|QCmT 1 2ЊЀJ >i ,]jQYrI|!$ R}&GIl@ro5i ZFV~ڤR&E&BIW"G@_uAz08_:N+v'?LK嵈:0q2f %`AJn2k83kTbCmi =FTԆ *?W؟{LN)8NlRW&~%ŅwX\]z`FIT?q+ Y(+*q``E qUQ=מ^+zmif?}pLE|9'j; er֏ͮom, ]_ǽp©9%Nc"-eǩ6Sd74."Dp9-][iv̄ S;.BX.n2 0-.%S-GWtېlxGr5>\{^)0lP=j3$b>߱XN+V/xi]Z"r c67.w]Mh&8U"8!y+Ʃ͘ɛrRAV8Y0ISɘN ZOP{&8K(= 2%)ov26-XИiﻙ=l:J[ X!&2=?΄F]01vV)W3kWk]j #E^t^=:6mFw`!?avE#z7 tfHI/=ʴWƋIN|]O+ +PwO\NL\e5[>ⱦ/DV% jɹ>`?ߋ qL3>aKse==◉DHCc`+(6˞˅n ia#}\{ Uc |~G^#~c%IĐp-*R ԌȶS*)QfR$nI.QNG@t(;\P9YK1=4G3_L a [ܸY|a"-RWӯf&/}JdwWTp,:l28ߊnyLH q;JGFڵ'4W&\sG hǐwF(`*ۜ ubt7;8Z_=XZԬf8s7`w VNVZ {;]b?$BT{93}DQ0ft+/jn>QD .\j4nˏ'ethTˍ/Bdd1V:1[cdMh!U>}<~4xAIS!?W b1,~?7%}=J y#̓rlʯg(oѝlw\+QTs78_{Xc%ړI/V[3u]wf:NCF{W+CcP&Yp9OvY%eR0(gNB4E…"`}uOE5,ʍOl[{~lOF˽FcG &d0oOۗ4lG8g=7.>a[נz=Sbyٿֶ@t󣭵0޸)G)QEGA]9C9X_=*9jN_ږ ;DeeU۞ZQ(eBd[ 40WM{7$}%YTiccyߺ" zryjW?wnVXʰF`1LM, em6[}G =[N0sͤ1eTQD͉"1^?_wV9OaHw|Ç5J6v/ZZY KCRpόlXsrȇo\4(roB(hFM X$)@u>24>DF7Q,`,RerHGє1#H)`jijcx iBą K啶&lP "0s a~0C<=^.ѐ16<\z^_r=uE$݈ *ɺ4!]ɜ zÃ,")t%M;S=ݳ㭹Q9 >ID-pLmK#z&6O+SR@^ށClܚ3'/:xK*peֶ+4!!ƪ܌A 
yR^rw5XkN5B0ӣXAi8$]ZjAR>Q_jq!ZjHMmvlI4ySu"hj1ˀbi"fYH,()a۹pwʱS7XQY[FJtYiRȮtvrG LHk 8M/ƭy∋N8ARJ[©s! CXEKei5&;Y%M]့Ido:'=Whf֬W :_X8NzeP0CgdDgݻ :(OU/R(iKӫ@IAz<*ELkLWOJjY's>H]{$* z{DVĕ\(RvIKʬ/>¦.X(6ߥ/UH L#_6ψg}NJZގ yEZ={ܱ1 9p@lZpQ J=NY`ym][nkÉy 5wTxq$x|= ]kַfgkyN}"wf܏]ʬis![w-5󰦿s eg~sZbؔagR$SťXҔNIטq5ś]hm,n\9wHWܸ+n6'3.[;߷ٻ }G ѽ^0Tu?+P=%s1j4 ݗlwQk (P =lU3X܇afl[\QCI92^/Ew@'Hw X 26VGW/O/1Lݴ칵.KJyM\Gn:qAhi>?*Űضy5p9bk"Ie &{-.x"4hR67XR&m)=G.w-7CJ} ?ve|ӝ<&Oݚyz..ʌXU/<S+Εx$b۟÷"kxV~{ _)z.̬ IڪqƲjmSt >wcn7o7ם'LӉwM㭗MjM=4736{յl'V{եn=c{t̵r얾a]R9-ih(FtLɚFԮAtOEwRL{fwI,UGٞ r=[\VAmMcߌiwnc=gUaCvL7 ]\ :KgB(v/Ƴ̎Nb)I%vsZ,N, Nia޸2/Zуm#Q8_R[,[pG1F%1Ն%`$[@`SN`pDezeLN{Tm 7(,1)7ۛ+NT93'|왨$ o/4'  0(*68#qQ64!c[n.O &yAlWඝ;91dE)BLLv@@!N B#Vݹʙ?,\py&!Oc3& "W/Nžb ?zo$G nldۣp<I6wƮST`;Hj76/n\kwYF'rI0M3"p-Ю\Jw5 aKLR=}NU:=rvq˄hG/ɘi/չ 8Y<(띊1S9!%J'a(;1Q5XG7F󎎰l)P"Jh8_KqV$::&!!XbPXJ p ZA=Sr łƬ/ep˓ܐlSavYXLj}$F[a ifߴj //$߬Lto DAq@M6Gϭ 8J6Ҹ&SIʜ CX Ph9QeF)e0xxxT ?K}߰|˄'_=6P!b! q 2#SHȥ?xycD%R`p^yOm\nvZg)0r,3;D%/@4H~NahXa83\%Zp '#qrH'慮BdE'GӸ-W\ GMWx.2!WɚE ?:dq(WĮZ%WI"8lP!Tw,bdpJgb9VSȮ5\":9dGdmYh@UF8}_6U&m!uE$Հ%]XLBnٷ6Ṯ\:<:i8-@/Ǘ93eP{V*c ]5cGv;֠gNqiP pzwE_3'|'{Ve[:f'&& /]ӯhךg^xQ@%/%kWs=ːv-N8@…w9N!.M'!B IV",i gg3uB:=v|\k[eY(ijzQ?W\+\SAr#`1 冀Sbszbjǔ;F. b!Bz{Å:J3g0eQWQodfSAwV:`!ȰQ#E]OHajehrm, O"̲O}> کAqV$^9_6=%OLZzKݮan 3mxp?k Fe_`V)J?H\.$ט <{5,6lU*MUa$xX2./s()@%XR?ga`7m=|P۸ ;ȜoVNs2zmJ7hhYl F>69_IDJ ɾ|7|\%߅x:Wwꍞ=u3.ͧ?G`&8y cV;s?Wu񾡯m]a]yϸi>ej|ɼ/sd\CDO&?񬦶=$jEgHX.G!Z9"@c5)vF $EcYwq&00}(}.~{lF\ }[v?Գg<rm]sV<-LoRI9u|H54xs^y-{HO e>Fd` 9UFi1d\6T2KW['+~y֤C&G.Pq &\'(pJ,!<[\CYGs+؄56%%"x9A r NQ\Be+d*U^ؤj8) ClE\ެnLxOl>T._ω \1Y6[2Vv^~@|nn][$<f7pCgr@'{~"E֭Fcbs`F~P:M_Ω=IU@UUcY~1&6VA 衼| a֝R!S\= qO/փfQЦ0"/(F fp^fν;Y+Ǟqz<<[ZL{BНκrlg/1Y;rGmԄֈ_n~H{%? 
c{g@Ns LVPoQsϔ߃c5:hxzH ?TYtŝQ[ _\Y,qXrz~LUU8qt>G+"DіCBIJi=)'I4*i=2ùNK3z90";ʏY,1%5~^{Hvkrn1ȰCNȉ/O@uFϾtMU&{h?yS5)BNw*& "a{|2qۣ^)k)͞һm)y/V9JFb;Q(o$a<#>8,i1Arv}RIaS!<-$VCuF^pk"^`_LC~6FP$b3boܩ59Fz>Iˣ̛2$sX1Sޕs~4jMY^DQXA6LWѪgz6($mSE, *[F<.$}秡rJާqP Qe5ڬ`է,ԩ]5b_O4PMڇ|z3#!D?msdyMqߨ<~U֫}k=)Zh7*c,E _L ]}Vq~7c+hqƮFgLvxs 65^k{1&riw0W2R'nz[(WOn~U\d-_qkz 5/_Pןklnȇwwx6X,`:.e8MKu75X788ʱT2&&Wwgnنjy2ytz*GwYDܪVr~G#i>ڏOlR'@dܪ--&d$ ZX"+QAP*H9%>|Iߗfo$$$ǝqcxNU'@3jcպ]0niIfvs1Od\/j=knѶinz\/4$k^Fv<2kR {u9h[ lS o N?[ nA[zT3D@!)~JB %AQl.^\ Ntނ%ʣr"L2I"/ly]ˮh j}׋i[@l@ Oͷ<$RThiI"%J[#( ?0(Ijk^O`Qj9kg3=oyX3\-aОIENJ}}4f_ KǬRyGaOb#bQkpK(im-3ʄFHG*J{g}jI9z['eGW* %kCPok+H%z|<;Ѡ3yي Qe}4':(:o;J]X0Q-l_3yѬX6\+H]Y&Q AV^g˰iK{BWr؏U^%4\ ~zabi 3#RF >d\Ҩ`o0d̞]<1Da4|$ bPb_Kz}lx]nC?r(jȽXA Z8!f4X]e`|r9*^D9*^kOvB߅YOaEX}D 3eUYxU`,vX3t,N c%kxke( Jn{<ۂ{M[M|Sw\?Ib9Q9Z]6LI$,_HHOW^epg;ֲŗaR *>o,l):lc_͚B6fMW#W;{戞6,fRXWGD2(#DȦie^_1v~./Bу^71[ Ϣ `=MPqLIކ ;M""! 1_*{TÁ{%HӴOVQxL 4lgQEePeY#hi}He{őoP e#fQ-.[~ڦc2J|Ms_± n1QE1e$Dt?$1 UdRR.)@ O!8>0ۥzoGg{zyO4!L݁^-< 0B|&oq̀qfRp\eL8'2F47g0f*2Y&nR-kjO^jve00a$y ج~$ؓ9:UG~-"m,ҿjHYzNv}Fd!LL"IdLv 7ܷ,E( s.(2F;EΣA(&ok8_'pĜK_}EIIJ sKK%x& N{Ddt=:qܳtN{RR ̢f)(*54eftYKN]@I5᙮u=z<Ɠ> IyhSsД5 i[}21~J$Yg&ƶǓ'*Fs3FGD(Rdh$)QkT JsuK2t KLL}5n7Yr^^=4\ ]tt6ya;?Z:BOٍgvqQZq((or$Ae+b#T=nw3ofv7ؐBB #FS}O^`3Z߼n77>r~~?kHKݟݱt>YE#vAPGC4qTFB#Akb6rôQZ`Fƅxw2$+]Ր/erQd'4֔/s̑/uro}JS unkx/Z{Jm>`[5i'+բ<*c\  N ؅m z-LCUtMj(6kj,D:o 2`R dFelƘNmRWaZ|2hY۸u_{s!^dA#[E<$n.4+~}V ;SU#(*STy*A(>*wJY/y 15~ "O5LRHEj5ժC=$p818HX -&8BHr4۱jd\֑a WΫ7Űl.͌?;D pwJQۺٍ&2v ^~{TC-VZseRLRt5Ф-Iٷ_[ctʾG,|r7st̶ +:=GQpCChqИK99'QjZWu>x9_q3#t7pCx)k" tZ.d5ک\DyO|^cG~tHq1%)jn922C{#T -q̲(>Ш'u(|~48J}il!;p6ڬmZy`ȦQqPQA6 mN Y"RAl Q n[\ _[H㡀X0 OLu!ț9"#6YCEHvb7xӇ؞i<ڙ^̞#]W c]v #vz--9vЍ~AZ*]M&}d&/8QxnR6%~b2qs% (. iy%ye[?kδ!ũ7x~I0Homv5<-1b5$ :zLz GWG0c[5 j}4lx 4`;Bqx_ :>?nٛt:m[_DT.Xc|>?6"ڈ 1AA(L/3T-|X71'H50:zZjo,PCYgab5osW#xtFJ RX.NxTXAyYq#{"IGc8_dF D~M$w%~ ҏ@r[GcRndBF:g#Bˬ!|؅NG$m iEĄ i1L~~8[c0qs'W&He f+]+e$aeX̚vSbx E [{ M;$/ձ` K&B? 
rS-B׺ W\Ynn _^ݥj԰Y/v0>qcrYWd[x߲s|$ x_l<혚*㺚lP`[,0yĴr<>_vqX8`Y$cHHge~*Yx~AZf-- k\Q?7f98ٻLD>}-ݜ(^KXR-ikK@MH+>L%t!‡ىYݜv۳'SXʁ}[{`4Qm)4tĢG 嬂 üzUL5ȷP Ofud=42Ԋڛ `p@<{mtoۺ{v|>&o'9>y+!K6[LdNpqn9ҳ||p~%Oׁ*-i4;[fR=xLF/Omԋ6O7 6}kRuDO4WaFaՓۃ k?q4_̽GKϐ?XImO EOwHA;r7La%$RTxn#Q)ެ t4s@1ͬ _=Ȥ09 % Q[Y:%D3CRw2erd%2ģaQl:Of]wA9xC6IRIrI(Bv. G.Ōθ(X%`?4%EDKmZq!>oQ#.t&oK8QهŃgPfSs{ӔAvcCVIf9&(޹"")3 n8f(2E= #ՍU(:½CPξhwn??.<Çm(gtTa{4#x 1 1.,]*BYNbK0GfE&z '^HbO9M''7m&6o62{tI143R1yK=^@Qeً13qG'rCLd Gk\"SK?ReS?}%V9sf S) R-+McI"e" ʄ og/P*C :a@> Cp$j\r_ ArC'Ypʹ6 RQ,Ƀ7qEK͂2ԵKE]xzt{MSJq@F-y K•=SEtCQТ?,c^pJ[1:rHd_$ jte)WjY!B)p,aμ$dsH,UF"/v^2([IracciVHEm3WHP`\!EQ[u% ApxH?ܓ:9D[AzX|l߲,vSsVG}jU3DvW^SCm^&{\,.>.[9˷JA(pE)(#S.Y:ka gdϋ g*zHwcri>֥G==V5rX?~nj;_Wwu"TQG݄7TGEyw >  ;\e[\IdKH(1x~6.$@nvϕfw¸m20KȞz~G6ٺc_̭ne&M@'t+bi8ʢ`bM*w!Ku>+W'JvԊQT3+)7&Gsή7` ]o S%t襇}kP+ G>( iu^'rͦYV֢rCყs40t֮ݸnvj5: D&W.rr4'dUY#?ޞGhr 8x9Ԟi0w\fR1k$cU5`VS 2֫e3_Yo:wy9r (Y$Ko@\r]?n NݖfzVs-}1D СɼGQ脨L}&̲9Kd:n~"=c<-B<q> xuKa1iml67nJJ)BN>t;2"#ت_Ԇ7' FԿ Zbܚښڱ^IWL9F<ņ{l"xO8.eʉODA^@xb:*W%"5"w>ߞ59APn$5:®o9o:z`yw7fpfuA P}\<|h>SuLq-"PI[ŗD",SP/ ܠW5`R?h`ġ*n+èK-Mwk{+hqMO/teE+RLCː5JHD1ِ ޲Ȋ^UU2ln.}:=z)Wxt%䞎͂AO=P&- UZ~P%㪹x@nQ񮊪}Y į =x,ERo tg4F #L,YvG;w[3d+l`KmQ_ڣ k-+{m:E5wT1 J_|ⅠQ MjTFnX^KQ}<b3I}dԇZzrvxMa@tT8 ֱoDNMM _W]ZNOi=kdUI3ز(jDn)E"iiX=PWjf5hۤv␊Y"\/EE=Ykjfi7fpk v%Pa"hJImDӘyxlzFϕ%jV0V)ifALGt'n@ؘ.i"[ C,ed nu؃>܊4_::uٲP Lً^#WE*ƩJ3-$<Flj6#3R=/KD-5s' Rˬ7nB|b҈wX}[wV$pq%u(s~H#*=mY$,qJ'F^2ITO2c Z|Ca'ma8=||Xp:HGڰje}.xܣh'i"ӘX/DC@'k@b%8s;:"͋7761ȵT_7[hM}#]oJƠn0H@ |5-b\_yk_y]M\~\_m{]7Uqg*3 <Y׃E( 4(h8=Xb!\!(ÌjICb=,`O}Y8}ݓ{#6Evwf\-Qei!w.ik>FP +G=.,иWmp-'Ŏ#gNJ2ڗ >mE1rWr-\E{F[H]mطR'mj{<%5jQkOq[މ [;`5Ĺ_O#u@<&ߥ^a}m܍ 7 :RzdkZP$:TGR-}AN;VicRGJT-c|_ğgoo|wS2kxtP00$ AX`bfR4Ԇ'G֙C?%$~g'YIW,_^\yD.sPFG CEN%U)%] .$*ͼF^IGL@ L>nq@h$!0"0Ԩ u>jMe5QIJl َB< x p96xO1  qe>VeygK/Itp_^;2Xam #7bSrQl6N2LVcTܘqA=&u&*rZJ*6G ✾kIR5JEQ谦bYN2Uhfua۴5sNBtt $ 뮛LxCfq`oI{Kx`((vok 6f 5h觍Oɖ1F,[LlAz*MH=3R8:8 kڍ*,>dAayCj3NvIo5Տ6x*;$RACRZJ F\DV3j ,u)WW3c؏T '[FSm2UG?tjycT&:F-ABl1{rc8G 
vY3e\tX]5Fؔ~M!fc~Liw}ͷ5E)BKmSW~_6c k]Z-L(oI1ObӨ ?UmVr[}cKj32`&VN=kyS7ZE]_b2w"Tֺ-z,tvJ1\W-p״l韛W vؖ'ry[BXVMΟH>'",/''4Yn'sFfYER'Ӎ{Yx&Pn'$V4Eݶl,Gc!_>p# g%˿BYD~y τo7Rҍ쬽Kw[ m1 /)͘1`[F\C<&@ӵ_-4PqU QgCsIJvz"vc+?I^lR=sH,Ra&mR9/ד*uZB8ijBcc-.h|m#:P"YTbB]'E]'qsi':)kneϱ+ JpBMι'Atm!3>D0t嫺#ݾ#VU}(#0~NpO(%dR.lcUSL}ஜTCd;dԬkaN;ɩK= x/&5hz\5D=RI+KM>E>Z23Oa02t*™[R"E>2iGd>1b".3|LBa;jh*mBGф>ĪG҄>4-r;klyE4E\M[UI]CTi]Sd\Sd\SȄBO1 ebnKVeb/o;{6}{p?ֽ!bA%&>2a_^.e7ٔr0כMx1כNzǙm(a_s-tFb(1t1XoCPԮ_J&s!wV27_"YW\(됟TL/{{ 1UtjTC<ՇUڣ]U] ʩ[u_Ȫtki5ܐ۹*s%ʺgXF8/"^n.4ʴ(ɯ(]ʑrtՔ]>yMju!{= @0  OF5 4?㦟#p*ӓsƓa#S\wM\U9U*:#CBt{",HM掷B 'bkfd|y;gأQ2nvVBQFx7Nf7j289%3V"SG?GhF,O< B@oM.蘣IkDK*ӐJ[TZ,CaU<0 ٰN HA"=ؠnT%<@մknPW 5^lߧSzUau%NH6Pj9Ws 9F׏Al^v >V EnXSW{RF9]ƺ/bw_h;;[*1nuF{@uZ-ޱU Jd:f;Z9dE"S*h܌nCjXRui=1 Ӊ.nw;+_4}d"{Wb46yһ'}+VK@M, Ɇq&J+Kj=%/jDP^ ; Zf~ޚqjɟ9" bŅi)-Y ~<^T՘k2BզDl"xJcs:c:JTo *ҏr 'P~.`[~;rJu=!IDJ8Vjbc'(㲧tKOވv-#n$"K%o/0o{FA#M9ެ;jPC^NQ1W;Z$ ձŬNeNbxC ƒ:_ԗSu_(h #Vg-úpf7g)I/qon9d(qkƭ -\6d"CX-ԅyesBX̓^h|媟pOw }(F`jBp V{o/OмzOM"cد@E}E=0tD ۰4?NX)Z,?Vu>0L,a)?}wz ǚ ]TOo]fKTtS*˕l yڏAXM09+sgk^"Q:p2f2XWI;5 z3mqJ]o ̏G뼵lVAuQ1G1~0MJw/F5Z`h=vR[l?NwX]+~ f[H6@{࿎L$ &k^#F|tHW|A<͕h_=s4K}7)XvuzϖV{*s _M#$`Ryhێ(y!~1=lGd_#c4&+c?ښo1O iԐ1x,ظqV?5v(rSd_!D/z}ega]򑽈7q ܐM*rѽJs:g(Ysӟ\ SE0r6{UԣǎrDώ,RhQ$6QSr| elfc*F `hD2Cp#3ޙߏrAZ~UjcSMoeU2̑*R}PkAeWn_6/=Oߗh}q(Fohw}/|TgUMdzD]2ɫ/!arn` ]Ȗ{4,}+lXvר=g.o4BnZ7DYqS|V(@GEjL / uan#0i9-`:z0! -6sD ӫؕC 6lAyhK*o-{D-DxߢKu*KxUM/}} aЀx;{yQLFzB1xZQokUt Y$]HIЉl돘|.ҍQHtMj ;ǂw[?fM)N|Xv4Rpw[(qMM_\/AOkѩyĠ:$ZiFrT=,؆\mwX[$R^=U-ERMǫ~ y#% dsXJ9OLGbӧܪ.(D-e+݊}RaHT3[sx2KٔDӧ^0=o"67e1"H !PBSwLl1ҞijA&GM6=xƁļ猆L/$a.CN-2-G~ŃHz`q226ivE>ƅ<~H_1)&TvB&%XMH1%\o6G',zl)a!j"b 53dbOa)WJ \E\< |"@0\Xe<ln=U% O !6+4~rL=ooUH TI$$`%*O04j'CL )MO hK+p*K㕷n9 %K$ΐpĎYD. 
8M'ޞM*-&ssSu N#[p_[mPԂ[mxUsCLGsY@2g4 sl |H"e UA\ق]|\]BVfzi IJK4ECs!eReu{܄D`4D1[ɕ S7PV v5lT9 6}uRnFΔ>XhժKp;ݰMjU a bM〬SrU$՞Tsg7Lk+<}tʯIP$5L2{=}Gr 6LG9$'k\*yYA-i$K8f3Q[O橃.2zqbvi}:o3_?|c?!qw=#z%3 p [ˑH2pgotD^k ~J+%8K8IxDL:6ڠuQ;>JRu҆8,TOCck@ bLů8ErmLɇxرyzdDOJߚ_-33Yӑag9d9T{^ {%-vycھpNVVgP}\׫ 9$`Fٚt~VKWI͙ZCiaCŹ9T^&Dїao[t.=>Br򥇾|Lm4ۼs4zg^<֨9ۼ;+AsQ$N` 2O]g^zW%-6K'!tQ`:I-^imvWCvY-v5i]n#Fu}jԇGI0,Q9Bz, r醭 Y3 {|Ǻ3s{Z~_&wg-gn/bPY"ɫ&dbe~~ǿٝ>61d66^fVNN_K7Gh[.sK5`Tbl ص QI8%ѵL~6piLj؁ZJ MYc_[O| kl5v0kKe4??<aF a-Dd3zsbnr+Kac)1t[^ [q՜OiFAiJ4c#ޣ=N;pA9 ֟8n ݃>vSkF>rz1ߚx6%A*@<wfPȔ|%cԻJyd DazU]6W(..2e"byۻ_Xn.+"]Fcetn[ ak+d@ $js&m$@TB{zz<q4d07ʞrY{Ȕg2Nߜ{(e=T,M֞@$oL ZZЂr" Lo̘Nť]g)"AVޙ^_Z(AM8- kz*jU~iefuE@@#a21rh){ϫ_эx3{&n&}wqoil8'>YJ󅅤lӍqejԪVx9lƒѰr@ e°ɉ @_8iӚjGQ>[%[AXLen4]ET1gz̉4_im/ {QGK'Aj`E2d&N`b.o絔KX/{nЦ'&ֶWL,~,b"ѷ#0xQ/BqwA=R- lqP֎aeM,&y7Y.Sn9Wt&Q*!K>`"l?ʹyyx4ÁJPGC$h{ GFklM Ozs}``sFoudRq .{cTPl&Q戊22$s M@tI$a"vjHL5iL/* ԆCeyRM/RM]OrpNdㆬ^3-{g'O^(ZC`ٸa`)< XĂ#B@Xr~Y屲fX a/}FT Irm hDsb|z$6ZO\>:*(VQ(yJjڙBp$eshGe6*.6Ok*|)adVtK~+UZ9_aumVV6c+UDUԧ[@w˨k/CnxRx(9q&MϏӀ\i[lGP8u~t v^jʑq8\I^e̓U f#ȷ⣥oY҇_yY; lM׶mwm۶m۶m۶m۶l:^eEeEfe,1jmMlI>hxpYUejj??؇t0JR.[b߲MT2$H_ y|]5^.s㟨I+hBglR >|Hk0ldJcIίS7\j)㒟=ëS} z( #U  lrO[^[G)5smY+jCZ]Sɥ1ksuMdQDsSh%pw3NRR`X*=m>^L9iїYnZx8AW ~F"g>g(=`{Sw-CFe5ynZSԐ x8274:hs:MA['5ݿ+&+P%6yc[YOގC Ҁ-/i1E}r^(fV s,3/=; 7AwQzإ&8@ bdnb`ldjaRF+W/Оo]T2~m>TDYxX6GL^A0ahg3,pї$; m<#I|j t U10Qo)Ѻa?Ruܐd{Mz֡o{oEΟzӎHA"`GNQ0oPZnhb3UZ̶AN@J@% ")}̀6be7m#mvD*<&DLʕۥ y_Ct6ΎxUu? 
t5+^EߪB;Kwi52NLǟ̅KXA!lPHqQwVmʰ"uD8!K2DEP YRB 9 (XzTL7>6+ogYHn2vRXN@sM2i?L:DE;fOe 'D]!2׺Bw&5TJyhil;C̆7Rx)\E'Y0d}- 96-,.%p-t>_?J ;/s^r2CN8~%g0!$nG0w8摌f Vp!'j N,83_ɶ+}&p#6ohsUTz@[U^ӍGeeHdu3^Q ˓-ENOGW"hZQʇQ)Bgn&-}T#tlfFj؃ Ru^..ZF_L0ɘ(UC$-B^p!0J(tј_x2X.0IIנ iU>:`%5wư3%R8d' 689W,ڬ\ UW>oMfT,"Nxj`erV` Rл.u7XL;284 fJ-=MO~_},V̷ #%rYGpf_-zM p7]ڽ/Y){9gHzio9jva=\<> Wg όf9HAqe#.nYyfyqTouq5ySʻ4/|~ ;Dsv gI״ P̨p1:,ٲD&ޤؔuݨ`2@+Q&bHq>S~j: Ys,b>Ֆ",Ne9&$n)ld lϲig:{qώ>Pb\AE=]dS60|B+Қ{*Q,Z@j τȡyfM ,K?uU)-n (CEI}WZޞk.Be+b CMB정%ʃDJWG ;PV]gMԿ߼tr1=Zr3nzڷI(볔9ͧh%S3# fXGFXV# xV]JngsϟWx7=3А2MFAjQrWH'8OK2q:b qP~$HR* ?Qhlz\udg &+hUxfɫã8Y"(* Ryx+|L0kiYW DB0x'5{y(lP#A^doH{g{[nͣi3ܢ@q0::'bL^ ʘӰhy=m ca6|7|/%KADs* =fEͭ} 9[UOAmj0A~\9j!M o! %*1!YN3~ήG^͖*GH>%Ɲo:ެy[mReUbpڀ]t?9.?oh;﷡DvyL8<DHj8m#P18'`n0$:zpU Hi/>NL)=y:%f w+b͡6Q}mB^5\0R8H`fU=89:"aD-)"~mI\SI=,lp&Fdt'!Ǝ[ܧ4r~wM+7sRnLR?w_[n.>ócpGw{obqah7ij3;#p;}4_@5̓i0Q;@0'HKꁼ<ұz[ <)#&R],x߆/u>`P]˭xWq3Wqi,ӉHtAd%fUҠlSeܩm.A|"k'B%)h y|u:4|Keg M/F^&&|`T>vM' #jN±ӎ0ĸD[ AU>x;B6I 7*TJa^RDDT~Z^Giwa7^xECD5RÎ5yN ՎtXeAI7Y-^vuwjOX{TJA4W- Jds$e/ϵjFиxrB<pqv8N)*\3LH bPZ6 f͕*4)mE Bp=Ѩ앓TMsUbCj+KFnb =C`bHݏ%#ʉЕ2Z$[ũc40;(U[ |&uZ*#(ԑTNv,[@prlg#I2!1^fi6#f"϶?5A3jYR2wHX/w}fG|mRl6IF{kj :bqta錳Uj |2 +'+QiµhUs)[`{L{%: n.a6kd3zULK`SU7)0F7mAoL.zEkdŢD I6K:ŪlF$-uad H%/Nd yT'W,%Јt80Q:&(Tw “˛ 6*"i$5#,T)#@ l2*CO,PԹA1:U| tTo]Miy(c1)XuY(`3ݝmYeyIء'_PӸ=|27]TXݿ#G>:#Qh'mQoLj5.C(_֧b K7+Pw;0b2ADE[\r].gwt|P2}ڙ7iړ-cܿ_ }3lɊj] ^ z x2\P3Tu*hjي1NqS1򳝠(5(TKJj ßCX ׿3At47!Ma`xҁNŠE6bsS_\Gn :S [QS>n*+V .ZMAJ{|"w ?̐Ő=U^5.x(Ms \%qjBCc5vRݧe1 5] }6Sl9ƻteSHrʦd _ +#%n99߁P^D{X*t3R0۝ }q4흨'řvBwfRy&;ǡ-LƙdS[ԊfX-/M\&(Ct@j Odޟu>G9zVa9Һ'<$UY*4W1gtf//B۲2<:f tS8~+]$^(^oL l°ZAy䓹j3p|"bR)XC"K>PI *; 9Pܫ'hݶc^*ur}$:oAQ[ /ci9mWMǡFA:T7jP&| {NY(r)d;qJKog׻a!z_c)%?z4ޓba\WHC)}z6 )4 ,P9}X[eH<0LЧ䴣n= H #F}fes!z܃dǡZf:^m* (y }EVx㿦s2`FK|?y*JUGoѠO;A%WXA'H?Ytv`_,N`|'6bBDD,*ceD*CKb';(@I׉=X󸼽""~^ϣhBJ% #$E|Lڧ)ᘢM0k[3ȋ.ﵜFFeDp}~0\ZIF󼬟^x?JYOa)mE ,ږWP"ar{9hݴe|<+Qo娫1!f\u`"3}_b};V+^F7%8tfn&HmkLRZsrsR pR3bx~/kq֮bJ?h܂Yڳ8K ,%t˪%_āЎI|p| [b:Ÿx!e|H׆HhRV%4!nIx՛ 
Kj<&?CIݤN`ٔJro^ÔPh?kd/Inԣ5e#Yw y|O")U'j<깆'E;_Ldg)dOA "PH Iqlqo )']Uen."~aEegޝ2] \/^PR7aelV奷S6H: .$'"2rn>ڊ> 7`by\eyy\ez5m2`*2bj0eK# zmR# u!q%s#ѠB BUOfnBOog 6`Q`nry/-O`Q..;D\;u`.2=]kduizUt@ N:^kkNZ^W'G=D|Jf5?б]^M,./ 7-q~)};[HroPKxF_lZ:T|Jţw߂n}߂0:- cĕyRdy^ZP< uO_N^u>: @Ae]*4dk]I/þ7֞ W?\?ʹ!^/Z1H3au`YLQ+f By'߱ĠI" Oy[-:MhS3_`b+.yTH>=[wO!%Kgφ%Pk?V]<~Q2̴arCQ#̽lmzNʴGA#.:.VR֡*Iu$[h;Y: l91Dx8ҡϧ~X!7ZdMͩ[1X/[1=(Sp^1KVs/B5$k ҝr64ú.zL*ϑiJ1l1WM-E:[~96̕$'՞6Eq" hac%9溄}e ZܐM- &Ne~r$`RxH#FU| Qwd}I^ppnYn2p$@w!U`+a@#D*CLK GO*,I#}}Jxh էqd2\`&qh|>pC},/<4SRޑ&>?:d$uu,Avq^ٸqgKBIjV^,э椝^*z+_9N`$_}-<U>Tz~A g3@ǧzCnxն)d-qCzhK&p5'/⌮ݵ 0V@  QLklϰB:b2 z_D'a)5?(~"Lkl-GѓX 6$b\1ȫ: SJ|L'S2a._:-gH?Vf=!kIh!ljB&i!D4XL>W3.l D4U=y(C1޵540@2](s DN2%'xmG+_.=s*=[UN65|YQܹVi !ˈaQ-3V :JkYp^E\pI .T/b 'Jg@( ߘZS|,@_J#q'?u50N, DF 2B+UI,ٌ0CaWG{72"pdZX|q#Sn,L: SףSnCƟ#5j~W6Q!UU(^̻ 3f=Q~즶BZ!!XTBHλ=yn y>ТҘ 9NU3#p.Hk 9ʭ}ȋ&+wrVRcExS~ ޜ\t@ķ2-1!>ǔ͏PFf8E0tG61аcL["G"Xe X AIbCb̼tU&c6a7ZG#~F&i!"/>mĩcS)ɒ#-G5{M՚~\X\aƪۮ ]%/ ['PX˯hjyn'Zu9;NNU0[Z~k2X :<<'u.h'vKD/XSO~mHVx|ւtP"? ώ9 q0 ܦ)iB,chuE- 0$b#?$LX] %˶+= BcG^XU'~9 h:kF;Z,پa2׺*>$燽Aκe{KM|ܙ| ? _m ^ gAnutֈwbE5 ,,g\6Xx:UMn.24|? ]g9Wnss,?DqK:IORcCg9n J1MJcFg'qKN# =eW-ygmp4׉$@2ԯS&fI&!RKfeWyh6?{2Q?.!_gM^3JrpIň`XsHg(̩[|,|h=~>ͯs'jT{5R=\D&Z.0Hڶƺ#{nu[u4N2ӑ/_Q(o9)stG[㖹) RN/>cinuID.(x;0ě j):\*&sC>sP~:ĬnaUb\.%8nkZs-Ł3v6j̮&C;1HHk < GTi"+P<֦:rb&FEʠE??IAj9՜ԙ D9 <,8;2=d-o<8u {(l|}~̸mA<'|fF]- ٩IЊJbg]7mux62L؏'RˬdY;XK"u#>$ 1+'#OTIХZi$8 Z ӣKM~KjAmxѽoXAe`|Z;)+qĿ>I--l !4l ӺZXIqyBH%1z/򮖌܋ۋ#n"8N W "jW`=r9ndbD΢]wYh.\D{B ib0q6Ws(Ŧ.@DƏoQ.(a8MDXCgs_qǑRSO+F\onS%Ɉ2,a 4~PnΚ$MҬd,Vu[p_ W-fu"bk$ ˪v5<ֻaOVB"F>ܬwC\qD7xx KѼ o-գ#{QCwm .Ѡݵ>-I~$Y#Ώ3O yg/TAg6؀:lXt%X%lt-74.hpLkPMMf#_Ӵ<;$ k-Yc lc2h#6 '#8ذ f^t^n1vfs urs;hkB!+x}a' }OpQHp,^k=:&2UXyȢ`}8(UOy^h@ c |YJgF4u"_vmm Y$O]cX߿H` N#a<34!d̝"Աd81;MR@)UӪ@VHdjW`<>Pڧ.CE>$] `B1Zr\| >+oxK 9#)#?MiًdKO?@fě6'6p`ɗ&Yx7ue(OPdyHWMjS^= W/ǿ&~vٻ&&º&ܒnЏ>Sn6pkI^IŴLx2M<^? 
]yay2B3;,9O9d%wYG?-x(,T$w/6 OT|r`jXKU˽<dX)XYgsNn/왪jswFFAwqw(6}|/R9Kk.#w+].W:ywT>VӡI4K[T4{^W^_t_VwWtntZ?)JFy1\lGPo/ʉGjA8J#F,Z1_VPZ [/ATHs{ձXǵ+2DzOOvm>}߁e򁐞 zbsڸ;to3݂<ޝ֋sb*c,흡 G &S-TjE =-DmH|MJ"jf&ch m*b?N)DZq^d X{Hx*\TSɷ*}Tse ʤon]$2#wۼM;n?&~]\^x~@X*ž+$e 9Yl]'_ШU U)M!%UY4uapW]nٚlp=q!~~8aw$qjMKADd]W(RҊ.*)nWBui9.@6:zM<{ D+:= 2F~)a1&,-n75Ԛnvhg"a{ {;7c$7wa`ВMՕAeoA38--ƀvsxB0ũ3#˳(rc݊H *Lsq“\KA "hƼGGD$qik1#5U,VљgҖU561C53@LTZ_hڌL+ ^~ʹ UMa#47Z-qn,i*Ne$q0=lh:kx|3P Ogr඲EjI\ieVoۄ;VvdFZצ(2bYp͛(:}~>N'pI+S_N'2Idw{C3GʇF06kiA۰h.HܳeU(In40,17 5F'f-iUHn *~f>nryڧ 4HPZl\_xy3^Җkdø}!@rh^̛zb4b>/)!]{@S4Ө\\d5x]OXV{1)MmwG*HV_W+Xk?+` _>߉M\w](2Onve;v:Y씴CoBc2ioUuQ 7 < §H%Nj8|weXiȿpgO %e ?Eq\ I.=ll,tISxo*(.Q]!ԡv@eYPk^ƃl*g 2SAXMHrw>wVD*/7E>1-׉eIؕ>z"B%u=g#Fq ^iS9ʼ{R #lc\iַfS=LZ 3HMɛTBurfY_Gw<^̕`~`RsЇEЉ7b(z ЋuTڪ|vOy~!e1 +[3'x:vg^[`Y#5ǫwEލʸ EEn"k2B")/*{^ T;}6%R Iu4L||bSSek7fk;!4b/y*cB{' CO{WOqXU)ȒmnRP&eQ!-pQEudxOn x,:z7!r>h{gZ?;w6eqkB&,hqniF>u9RF? h[:o$cuR%ؿ2TwSkw=Vf|ğRehE0DosBLtG]:) K:N~ 1d ?]O 9!OK8srO` &Ǜ#5)ʜXRoJϢg9xhkfn#y {gb;Ky7 L*-(AaCD\km|ǎ'/UZkڽ8H|o3|1ؕq<-]2o=:PR!v͟ǃ[:U߀sȸW\&VL|1WYT=MNYCݨ41Vz.1`~CN%+M>WR|С fJʩ#ܹu.xɆYpXFdIӬjc=7B)SU '#>У5TꦤbmqhI #m;$fLΥ5ljVD}$!wyeN)bdNV5ђrVs'#YHQ4:u^`;z C7^*zfYmeމΌ|LiJQ6A0N~6ei!Rvu"1ITx4w}ֺx!#-*wA,rѐMm(wlXeQ 삹#%MAm0SmރBe$p)rs <0#Ndh'ي2lF)礍|ZE'/2=,rھnaXܹ,8T|GnQXߣ2 Fcѐ}SlS~|^RM1ezv.MmX66RAX_~o`kQ@%9%ZIצtdظ׊d?аܲ"r^zm3Z-Rh{cPJ9& Lf"6.߄ء=Οf~EF z 6oҡX^2y]wyBS׳iwZKQy']B)YKRA=+Xʅ AL}Y VMfffۚ.o`1iG?ez˨i oFLw" ?AEhTO'_ȿy+: g8p;0;J5(IgeHG~#pp`|w[Mg"Q%CcǔQ6U}c@]<8X?I|mbR^xj;RJܴdZb6#r[uۚku ^{MLqQŻvy'^ |+Zímx+&;$1ZBe6YpB>/MND.]3ߗ iFBaKq( HG\]&~aob\?=3' f$ xK#N~{ h)❐ԏl=Eq8L;OH?!!>BѰ;y{@,!\Hv !64䯖K$,/t!Fx;zr8[\F/2>g"sbY~PinWNa[@zwT j/F\-P\ƺo#8H.[#֧nn܃vvH#[لXads1 K7 ף jWʴ"cvߙ³YS)YWm4eJ.'y$6C$ɚѦulW,n|chgn@ܦ[J`дҹc#sXx.ؼ VA5.7cC.D9q݆eX .5;*TM̡p bV^](sX~([fd M-+-b/131u5=fDe ڤ[pѷO5Z{>g>9g}RڤݤQNndڝDb0.j t(uJ"ump *woiSv %U8M4ǐ0P"r/fuրͰoPb}O};B*H!ȡN|lZf LI6phgG-x`Y->10q&mL+AL?&X|(FhH\K(?0/LZY\H UlW,R^ud# .;r'ch&wDݏEC LJɋ8:CH*DzqUvgUP%ڋVl6 # ֏qGpTF͋nח¢0 iSq Dz=0B{S aLP@XCc13d 
"FRډR,T^WRi>9?+"TL_!X;>/aIٚ4MNtչ/HX-cpړՕشZqn(B[VGd"d4*-mKCźU5!Cx7 ;괛ϺLs.[IkIf"%n]cD51Hx d e) hRӢ|=z[!z3W] Ts 1~o[ոFmLΘGTtU[%#L4^`t3‹" *nG/GP7y$nF X0SY7DЈG͛SRܚdeaŗG0YiD2׿!j@vJ/bQK . mHr(c>Z1NaPjw8;K!۟TCMP/E#҂ YPQ?Vkփy*=VҹIO (DS,i0{!IWδ NeoWU\пnrDyzM|v^V[jT%=sTF>DTs˽bxcDj6A8F(WLQЙkj\9r oiSU8RZREEX P_> `nH vݴse韖D{X2ss 8l! t4p]1c9n= gX}ywM@z8 <@fsd\0,1U4h WviO YÄL3('b#HVm^dH6Q _ @v_X Rh탴҃)D`.겜n IR%)Dͧ/`* uMP'ѯƠK"F߷)?SUuM\<7($%:ë ͱLZLJ\\_$Te# + aAETDdpk(Sg0Aն½}0h$+gH?G]l*.DUq(LU*dG Ix|2$j%@:Sv!QqX=чV7l 1[>%l;4Po!UG^ PZ;mv "x,dr;S}y;@$G4(c1Ø+U久dwQD~ɗӘ!!y mRaLn_I?cbFFֆ ;[-M7kN+˨W}n}t#ja's ,f9hI\F%J? L5X1$,)< pSO"1ȟ`B3ڹMROJJ߯&S9NݥPEA Ǜ20NTtgۈUݰ4默n.aʅXO]g晬|"q$XAIҙY8%lnfQxokͯ E6aM ~6(LQ$:\)1BkU qc$LҺGuF j)^Ǒv/ᯞ>k7BR@ 6ڱe{]L{8Q֍h\7 pI 6Ml]鲥II;DN#<؈}4?‚x=jCR7P1FS NI|){w-ムԫK5wh`y됱bCcz1p05-{t1y=tb[JvEf0ʏ"Gct0cö+Ẅp ehڄYgܳ/36P|Z לk n'7$3,hb0V3AH,Eu\w|Gw/vIǠ yK73}-ӞF^;W ~C7#q`ő f> iq"3%WDXU_D}fjH91Dq>D:~t `(yXjoH=%Ms۹hmWXJ)O8m6&zQVšޣFU n}Cڋ]f >[vs=(qx@AbGMaY#@uϷˢݤb?ݏ5p0Yefy Ydn\{Ya}KVCZټ2JWfT^.Dة,61p0]>WY[RJU>JQwXE`jH'ؽB|X*!zLWCȃ\Wp&2_10ߏMZ 78800NaLXj@5撴(KS4N4/o` Έ_3tȔQt ww0pDSz_)G4"QIH|*P[e7>[p6"cy$Y^(90E2.qz锻$I +0'JΜ6ž#?fVH*<iݽBM{.1% [р>d&sFj3hm9b|Bu<^hs ^1at있G([S>HRǪ]xc#<" ]4 i ua6ߦO.YFH#NBXsBHr,(|NOo}j7ž#lPOT Aʲ x]i`_Y=[^ߋɊ|e) zTyJ"yF!07IscPUW:p`mĀh07@f ypbGTA+*?fsrdbYN 4\:3~zf$dj֌ko1|UEJȎ<4,<6QfM^ aW@䧶u _O 5ь*=t]p*ĺء"i5^:(lLc7ˇY̩qRg(-O_kKny/[>O*zu/[}Y?Y:|o}3o)/Qp֖CéﲔڐZL=b4/N<@"a@:[51b epJy`*}^.0.8XK{(غA{淃a~᫜o_f~t?FTwo`noSFl'L!JPt5_(ZX5L1\Sz)ƾPӵz0uNRm}8=c8l&l[ƲA_P֌&ѤE0Nk&*_o21 &ԣGi0MM.Ycc vK6%ޛVx6T ͺ 6\r_"v7Ky|3oDoӋIK,( VA5ӯbPwSKsyvpsG@÷ %O֚iS8]T n:eU2pA A–KNbGJ3EW^X$ ^: \0M {9YN71Lix"qTYs%QDӄFp/еŨOp'ylٝ t_Nĉ`I')ݫ朡~;%d*ՠs}3_l(. 
*kIE.\-D0Jj&q>nVd zpZjԊK>HS6 u,1\:.(&@R@ \r} ke]]NO&YaP왥0Ⱦ@(v[v}EKV^sBB$ | m|z yU%nW5վ\\vdKDg_^Vs]6֫í7oaۖFWPr0Pdb8(K-_6^@s$򹎙5~7yp.fS^d~rπe5Ӗ5?vDr5U.lhӀӹ{xc0}޺-cI?}K]~LZ8.?K@ ~r(\ (fՐw[:LR;/}۟ ^HoI:wvl]Oܪk!dF"=?m +hь#hO+d7M9b5@"("L>B;'*NW+hҨ`BkE鸛#㍳m?g?~$Ëwz%g¹y~dlz|i{!һikvVk2p icYtY@{l W^~ݠ(3޴f2ZIq4mI+e`|(V4e&ҧNj1ep?_P>H!",e.DI!<'7(ܓdžrлsd5Hy{lV%5!,_I!,YL?5H&Mk%?kjLDbi ː&^ۿn)*% b9f|G |jg^m;t\\VON2 MbC?fPQ߇ڏC[+6%:30~ϵDbyq{66 wOU $L $O-(;FHởx&72gaZ* o4+'i}51 7`8xIS.'15Р$unMxG"D$\14R5@pH IE s\ a;"iw00#ڼ"*D${扱mJ8&ͪKLKߒ^ aB=㑂U2F#XzP.DΒaHj3X-5j+Aw3~Wtmh[vdW*MtTM#Sd,M!xh?]LI,f_;)" )|#Tk167J(aݠ 1 )ۮܯ>= a2ﻅ~B?_v2'הEBFyY \v FhɺQz+V%9%:BdK9Ѻ+I,`vtܞUf\ؚʙ*@ren9 ZƉx9ڐ`.8cQ+z} epI'Cl}9=IJ ZbN^'G4/&7HWZI8*nא} _ Oɢ ?s'5GM mLA5[rvaqZm0 ?mˢok،ѹ|f%}i@O'镏5nK K0fUӋ'%;@hW>25=aG&:oN+&NeZ*'+Ln&?/jch-3Q;|_C0Pœ0W!;{KٵH 8mײ,6_^|qiM<`*>KA\}UneJ9$ђFX5,h1x^^&&b^ݺ``P[_I JR k=vrq2!]ׇDMڛU'8v?nfR J/#_4ǣ_+#e+a&[3bWWf3@\A?Y[9Yۙ9 ;[=qve┛|VGչT ZFM[Zq;@¼NnrXsgG?΍mN\]ٻrvf74szeMS꡼e@7D M%(J(zЫP Ê3%hKNGmoVDY;dG<@" #m*fK>ZKAgpͦUCiBī0U_,[`C0$!F([5xrջd %1tXR+(W"h۔m[4Qھ#;NhonϐE/Ֆޑ^2* .8ۋRַ嶗Na+}><8nxOCfA sL$f帽 ed=ٝLf]EeLl.ݝ:}v.v*/#z#+Fi LD?wvcvF.N0T6n$NMM05ut*t!%PgPu2T.@-JaJqZjGŏcb86b2O'Fg^$bGQ& \($tʷ,zh``>.PgTfvPoZvD$3n4lEX?@M %Hؤz͡Rc F)+H:p#|l`e%j-z 3 Z @8sXm[t!~1B~mm DVE!_H'i=&(?'](WԌ>W'y!<޼ao3)1`::wKt}3l3]͙@ufz +%'Wq:Gltk n"1Aj]K+1;`]6hUl~U;ٲ= 804NV" 'Ds%@>A9,ڔEﯪcfdL`{>Ԋ)h. I,;I+ܓJ<_Ew 'A@VKMJXj=7*X8^ȞAΚ``Jhz2ʹY)dD\@C4A[QMVU* 9E:YZ  +Ϡ&W=RH)LA[Uk)rYx] L (N BΑmo8bfi p{8U%BFSA)) [md8Fw}CJuI^PF5$Sy A/]e̦QR`@V^ʯ)>r ꨊCR@](gݨe4!Kr\IWvv x}9=8eU}9 k@,ДsrBG-Y?!jDC՛!0LF]2;+3< O fG;Hœ^Hwܶ;+ d3c>#؅9Ʈ2қՀՑf-PHHJ P>%Gfb+E`s70xLGGaD躠] *i&U! 8ގ|>ܟ$j$t-mC+.aŽCd3LS(/f(x벰$1oY$29Խ*Qo]U:+3=G*7 *#}Q['J"v=$aJ{"UM4CszrN=Ȕ2o!RusU˯ARӸY!dvYٔxNBϬ, jYRoGvС.6azFnP8Vh "KS -vsQhѷ"E^W <%Qb_}K_繯Z: mR{8Ii0 U2ZbCSܫYs{-Y5RZטrLXhf*l-lZ#`-9en_ h S&Wk4۠If @kh/Z4ot /O[7C"z%5'Sa6I:uw)S”))*%N?@$+! 
ȿtF^`KЅi-X@-}0G ]361 )C Ec0C\,_Ac㤐b yh%YJ!*iƅQK> P@U0T@*b=6!r@)q@XwtKpj`{>`{fGOnϤ%˞\yvtR:{Uqw,N8xHlDX 5PT)bYqtwn\ߞɭ=jM9>i=v ,u/to/*N84ȚbfSzG9Sqe"m^do"%QOOAAɠ5[Svd(>dߢG{:~"|l_{A 7W +ϊ+/دX@_ {-`YMųQ9keĨd =PUҰLӻѬG '7הȫ+QT=wkҪ ׻BQ"ѝ|R7¬7FJ'ە#M>3 {Ń.,C %1I2M{Y&5nOޖM+|+؟gvTܪV<r :8+Ie>wKrՉ>0,)fTkE҄HtA ~N7V6rtB:/JmRxC==Aç>Xv~:~?мshﹲ?s%?' '>@6gQ2?Fy3Ђ7{G(&l[{ۜu 6LxKgm'6#?ՕVen5[MHèvk]{`#@I/ hUƓWq 4>p"O>/9ؒ$e:W!thE;g`3-Xm3|ie2KِO$<-clZ5^CǬBMg^oW%`נUT,kp\:9fZ.zb-wQ\kW=G6_vC̝^ٓ-c?}cz+"cH̨}4ܝ.ث&ρKf_JO5BN)hjangIn!w2T|/NHr9z=M2muסvi)BWΰ vmU(%CQG~gٌi_x2a<`kG=@CNqz1_ӑ׀Q%ؕ8LoWdlK~X/bct0 6:3h,xcnޝ~-G5u`&ahE"$ ƌ% n$h9 Ѐjz1WSɾ%Hł$f|.rZ_M;yܒ,GBbt7l Ϳ\(lh9z!i)*a[+a Z ~Ѻ1"\, dK4W+8 1d&{7O>W͕e9$o Nnjmg?PdasJܔX@UU DVEMsX!(neFɅ&O-#Cxz2fpܽ{1*J[_ g7Kt$͚FO:z |;P)ZZyB8#-})sWv5=F&.J+(Єw'qRƇE΢D]i(;6ˈ7-}G_0]aP v_Qޘn\Ϲ5+ِbq$mSs@L *5侅GPE%Ӕ 4M2d0lzُDofRetf & l7+5-vF,@|"7( a.|`ϸDVa4 UPv/mT^n_U$>ٔ, CMNeL3VC݅re3A);'qtHV|_<6{c2CaV\!(Mf nzj-z2]IѻfT(TfRߢ9&]J;ѤsD x6c%7 㶢o]%fFzkNz?av`TtxA,H EatfwB޴8cO%"ckou3 eLYB 2I%8f)PN]6VOsлv 1:A!(?iEo̬Qp '#E đO nI*֓d %(R3Ϋg8Uc"+*cTFE ڳc~f0h6AL>ү %ծEn ]H(}o\uEz3зMMODlQP|GM%]j|#a+T_i(v?mLZ:a8eT!~9%xAH68\:,>$}MMlL=L\?d Vk}2ٱjILg K+d5+r̈́^@>AqחF*R }~ٵ ##c}E}JR\nc<MLGtTc4f.)Sn S [6m]]7yc&M!x J~bLR2/T$ FIIFtSWqgq<7e*b`[F򊣟-my]qA#3o&_RZT"'~8*o9TtW*u'-{wcN+qxSW>dxˊ`j&N5y$#+{! 
~7WƷ..nw"NNN+8{>T|4B hnf?7?4$j"'-]zB &t]EeaX7@;c4؄)z6.="Fs,sH*さ=|2:[(z!ęwLOFGxnj"ΚCcUB |ARCTjJʌwbN,&:K챃l1Ƃk th2w'T{?o891)0kF^> 9Z?X9g^nǏ{o;#)K00o, wXu%{gre`XN:Z|' ętcglX?^r bDS#-r`%qZvʞǓı#FLv7'*weav t$}O__DOoŒӵ>ffVDbv3W* As\X}f,AsK% '@Q!R%µvE]{y#A`% 5#|X+Ne5@nW{o7BGοxXw3R{R$ueWr_$> {}t߁w.5\-ds/.DDYg K_Վ\ 3YF3$w׎{AWO{x@SQ,=1W4DG^=Đ^v`+/[.؀6=,rD7)JP^F #^do*_Eus`4?tkVmF܇v01[mW F-4͆%/Zlgo`/"J׎#c2hLFd_6 T bHǜ[Z*.@Qě)Uu z4˭wlv_p~a3cJcaЖrfd}3Nh"~z([>Km^"C)6XiųC#ȡrE#̱?OAo[sE  n'T&7G}bE3 ]x3|MlJY~7]ϭzhZgcMV/)%[2{;*ߜJ%n8OVBwptsZ\GDjiq= V~PPP.TDWa"R$|[18)8w֌˱KMie*FS c̲ti~, a>(<^݄E( Pf,zi P;fj>hև"'Ƥ;b-X,AZ+>.ZDo Wuo&3I.wVt\Nq7lkja.eqͩ E]a^Ԙ/&wI@cV "k-L #+RӸ̍~Z>6X;L|,KB, N6(+|r/6[E`:mk~ Y=+ݟivA/V鬭,le2qqJjf%Ƽ~ƼJrpfMU2'XpM (SYڹh M;;9(~Z:4\+׫h7 L C}ysjnf}Qm[ߩ&q)׵QbUR-8^/`2ε CTSM2ޗęn`+^yc09E Lun%P@3\՞%4xŏݢ *2B+_ pf̰QJT2I+lՉŎD=ҩeC;0*h4wĢ!ѲN;4 ~ԟ6+~TюBBK6?0z|D \a9eP~ȋ<]D.>pRzֱUÚ~&N;Aѣ}XƗG6W14q1viwl s\]?mw:{gLGU<3cc^QU$u>bȼ:Zn\01Vb4o8Z(ڔ686tEZg iW\0xVϖ'c/QgĵB off81V"XShaIװx=]afC1bT} Qb-/; nqMxZKV\r0ѺF,*=Y@'@4mynLOi$PSeo#|GW}dm!! 􀎡`lf/eKOrwJƀ8 oՋ )=Z"]&vg$yT#uGͫ.##8,UF+$K&izJK|UwHj,bW{M+=eUZE>O| RS'Tћ԰˘%8\t~yM s&8/.zʎw`j5&$ v q+UajV%;VMRS32րS=tV W$=UkEkN1ػ\ ᡾ ˔mX(07(Hg2}ɼS:Y2B94_ah7u+m? ! 
*DUQLeT~mA-@c9>%$bNJ-3,&WG7rJ8&onHL{~/еfOv?G B,2j?ñhgD3o#!*zC*Z@E}fIgA,HWbiECM@q2(w!jKB3la=ngNMQ"QNOrn ~D ׋ɖ(^Pu#x3 :]%ÉQ8?d<-bbU^{ VRNyXTv yʏBb#$.Zcf/vHK𶇅UAKuXup&$}?J8+_N}4q.GźQݮo ;Ԫ*1 To(Fl4_ iwXG%e]l>A:|#tL Hè|Ām$:HoaJT6?ozkX(҉E>/cz"$ZjJd%-O=ұ<&G÷6D'F)G#R*|nw4n/$^ۂB̮:0}̫45Ⱥ3}v}r6?'Kס}P=ybK\%ykSR2P1mމUvh g~ǰW)( ,ba='z%g\ 'Ih HyM-Hm\[~Rw}/6쒣ⱨQeġw+mm\FE7  M2G,ƞ@2rr[րQQJ[5 JgBeĬ`qu #ƱR^ʄyi%!M,^$m(~ã }wjG9UNU Dr&TFrEلry t*|0nQ!*Z[Gj+,]#·[hRvvXn˝cy:/-_c?w'kԍ2N_RykyHV eȹ6 S2/ .gy*0 q~nڵjpה+F5:tŭ:IIAqV{ Iu jpKš]F;x.ϒz'|pEt[s~2z}Q9R~zFA9d\e$1vusNidzJ3Y0t'λw*ZgU^/zV{3Q.!92YMG\+OɇvNΟ c(!kڛo$znF 6],#X傊~^wST['o|iz;3|whR@=a$h>>l4L бTe`=hq"KN8XLj$/ 3 hVq';ɑ&a%-"Mo`bP{E dhۡy]3cB8#Qc9 j`jȽw}uuk-2Vq!frO+U+ T~֗(}dj~;U$#> >R gm@!qʳJ棫[ F[R{O$%52Rac(HEnAQ"^0[ n ~de366rZA+x-*r0 q3 {RCX$Ÿ=IK$:~B(Cr8Y\Xz`iIndkqPM'Ne[U 'Z D5A܌U ۊ:g` ˦-[ ܄d\/D7G)$E! śdIK ZZH.1^3uPGCETSGPS +ŀ^r ։ۍ)LI3Vv3(5WD>5O,Ouk춮=p|ǦhLi\Ck{UV}M!)_IY+.6F%;d5=Ęf*(X'\N,ƀRd##\f0$&HW>2FEA~&N2*rnh{>o[3yԴ GKZvszk?k7Sht(~'qK`D/ks\U٪Γ˥2e^m;/4pƻ}DFmgfdt*ЪJu@>æZ:tl-Nk%S:,nFfftZf݆Sena 4n/qќvɐd!u饪|^m ^!IfyΕV̭RrNe^++ykj@(],Tf X]%Hp"! yTx.Jrx< t07_PO됸 #hF ~Ձ ă8(y&cGIM 3(FPB-P2AUdKEzQDuX6K-oaI/ FqEu_6ooo}x9rX!|1:D80tdȸۚr?jv,T~X@V&,lU?94j@Co -zj"؎7TVH ;M`ҡ)ithF-Q/-8F'I,eqpb_`w*R/S/:l 9>QC^:FAP,CBsw/o/@ܽ40^>19!XVv1`B+/J'Wg-j3>i5AZw" J>c-Y.oL~4J\=tz֭5̓ViQ7U5$2$ಚ,t4֬b!6e*!iޅ OaZlŒz_&!P}+NqX\%ƷT? xU+y$VwFF|bV&`nlQZlh$ULD!=yg /]veG9CA+ oṰL[GLA#) yMWbu _ϫ0?"1Aox3pEyxRik&o! 7@x lxsp~u%Ƈ=xNh-HT/8~}ʴJiuOe⪑ :N.1\bW5KG昬jw3bBAɰĩtќUjC-;(_pvOgA =\3~/ބJl hX?8Z^S! 
2"++$~&a2}WcnQhY"=AZkѲp@Ȉrcb|H6e`Ju"KGMd\"b^mX7x?{5 p .fSC|32 O,abdqOՙKep.w7phe-ѱ2ޮl \ǤzxRIkr,M1yj5r؃%g\yבȑ4DV@P*Ҡ4)'ݫN@Wh;k΁iO Qm;h:n:ǽM|ሟ&es5~*V"a+-%䙼wc$9C*iQZc%:Ϗ(]dm\ifrUZ\;`-nh\# '>K1y#2AFG^.ܴ+|aNB NYܾjl)roOLX֎Xl nAlxN҉r)!p5;{n` iEmݵSq u'|[ X.Q¢y{wë:+[X<` :&y0fgMS(JjS609v^3ulLYL$S=كPC@c틉9@K+S[h0rz2pUٚBMs|]K4}_1v Ʋ]_ @(GzJ)]^ + p:vAzjvf(KRzQ,(ZTCy}lVYAQGA XJ[΁m$Y mIL !u0;ܪ◂p )h "qGe+2ʤҍ~kW\Iyvx陿[$~s?SE̝c00ZIKzV?08ľ4C]#v%(kxi]k L nSScP6nbvDr|,IXdM}m<~t52w0\Њ}x@Ld5PyLp\dCmR9>Ƽ7>Iă~뻾"w2 o22,*<aBx{x;84}% {d 1A*U©N [ \ІN:d9h8ۑ@;>}nLk&}o:ej}ah'v#8AQ׿t raC&+Qw|q Ր ۪- ChL C B}<K ~N]wKl.l 7E[ΙTk>3)ZqGg4vs*qʑsu|ꈆ[ֈ}IKn{I\>*9YͨO\u!41IVMfn;fc6>1,[RyYL0;cz%^a]ZBeW(ݑgt;=0'X-s fN5Ph#C^-kfgʆֽzl(͐Cp zt˾Y3!B% ld5 ݩS#ᗍP5>&B@rp\=!9V-5ۚӲsJ4vE=v- )%XjSk91HP_0`QjnxMKTK* $KKX&@Vj7{`m\fఖI`YBCn]z*Ra=Q29_ނ%!݋=z Z?k.0eCj9Q)`5AkG=΋&#oPMMހqliIjhfzSLtNG HU[4@64ly-:eqy(a\ Әj+Jo)f{e')u>b7K"O{n$wbP{NU8Eo sSh2!bJx*@g◔M::gVH>%8ԘFKAx7~H8lpJuK%IohsK%`~胀9|}4i$|% vaHhh,o_ $}0`MjQ(ӣ,1>KN0'6-쀄1&O I'F5l}Yz:EʹS#Y3 naqm`cDG W-߬tk6A '~=#ƥ7\N84躙*9%g1Hv8cU٣= 5NbT2PgwN=+ߡ7 ȉ šNR,ib4J 9mE(ET!$gL 8&&,θ<tŹ!E"JpDž3Ƃ*t%i+kmҩk8퉐LypAbe\6 <{+x 0uP][y_+Q W{ 3pv^*^_PT)w48r84hyj+ XGydlthfþ ީؖE7u<:wںQE&!4=)V8KlfAڡ64FuW\jvwL_p^RvEυJ+%{y@YSf)C$Ey‘VmINRqA-gP* (mAٜ !^Ù>S~Ju4Q>"{sa&SC\SzdIݜӁz}Au=7 ]J (Bi`@. >i5`.U9F_XW:pƗ@㏎oAJXQLʝQR=% rOgl.&E&Js`ފGD cLxqZ R蒭~vlD{1WZ pR ؀lXuw5RBn}aoOJM`p%1j_ruI]g' N'Ăަ^7k˛Z,4F0.L\a5splHCT(SX$O^FxZ?:0,[ c'=+䓯l_X"6q͔ZkY㦘]׊tB)RCj2Ňio~ʺ\*$LYA/)ITD0b]E;ܶD5%5$sɚr "> M]DQ ]ep+&U༵EXv&0RzL`W0V&uiwG6t׺"4i[c (nKBLY1Np*-h:2&;wh*h1ag#  ydwnE~N/ZūaX*dZ?APBgT!pj_Tx! iMWQ$Kjwmp LZsj9'@蓐:}~Q$fJv{ <ŹKQ0ҁ_ qBm/}L{l]ۇLYr W9=V9]>-"]4]q (75 *W `, va2)b*iT`~mD`[g;_'6 6Qt[Kk?.}S|cm, 6! NoL'1R<5Q4wWP ^ HI1/pߨmo~HI }r7d_DvgM_o0$ʪ)=w k6yqn'oƞYWk>>bjMޒ OSֳA7~H8?"̑**$nVi(R?aqe$Q+tTyAh UptYiXҢu7ѩ]7‡afAҤ{T-pg?.Hk l}jAgD] e$wɱqm8ESO}2B.+l snDy !0g֭PRFkQPY]XZMG p8Ux3Vާ;*Tx#D*8Q|hH2{M#ړJ$Ip٬eA))ơ4cŜ9+Gv1Ď~!%s%BF"f$oM.x{[H)]Q4 iXdS/)p{TZB]ӹҧu&l w*~]t ɸ9F-q>/Go>_2.mhHKBlD@<;Hԓfo&kQ(*H'%ӱz~1W%8"+64(O>[ .!AEi~? 
.^sGF{J2spFB9H5 {Oi/ŜmQ ݸxw>T}dt$A}t{%2n}E>Xyo|gLg OdW_8ߕUD4=ސIEt_ro#=C` &6L+._P[Jt]g"Y<4d`&z ;ZEbJGGp;`Gu 输Ȅ8c9ngYO2c&5+ݜ)PoBRm(1ixmbi&SH2G*%d =P,HI$rr S0ÎI%ّ 4lVS w[/eCZ66ضnXU+W/̒.V ?1AZM4&92{z+:K:s3!PڶB^ZKSB>|m/'w<N<=5mE]ui=Zpٚ-v&M|\&==m|ˣe~Z2%@#Fi'#㒼$FC,JAXW+2IYfMOTäkݰrzIQTQT4G঑z52Ha·uA6 %IpkUyt*K950@Ԧ߹4 q$R3!Xs녒S;_sKSܓFɞݽ×;,^og%2Beu *Y7|Ï-9p'vc|sfoU'A[Zv;Eَ. WСL?zҎQ|5x?z7a~V^91JȈ 1H?_r-~'o;'|n^huvuDU6(.zt PQ afGDU`˰XGL[y7rQDg(Mu6. ,8#ydp<+5_QN܎Qu2{ə22dløtA9/?vqs&( Q|> 8#6H!!Qх ;ٖ{?| 2O X"9a׈ ̘UW9wHkBc2FۿQ ̞~@[hC5&a3ASĽ2֮+(pZ=Ŗ".8ՈdH*Gj~+?a7_+,tJ\I|'Jm0NaE)Y#AЭ>9tK!A=ZLH˲cţ,8nׁ=T|Qp3\ #dHLXVo1&+? M|Y#3%v9Sg; SdZb?}IJ@o6KW4EEפ?PY)"ׅ_z3yEV;xҖ=L.E*H**9%^/vJ '=@ȼ*A%PuE2E)U?x&sCE>b^& [% a@ 2N)\4XXF<77kbhk@~j x:v0>GOJQZ`yh{sB:O7S̐Tܵ!]b]ɷ$J#̇灼^QP3! $~70:k1:nnc1$i ̬74HUIHSتaRfT`+"<4N#A8fqyԕn;62M ;a> :$^ n. n|vg1Z: 6?kȦs #/;^mVo?ʼ.1o.bhD<֓?SA:LafQOZ{?Q EÃbGcp!EN~]w=E62֚_Gn\wiV~@=5096Sg+!Tf )n) ;?j_Zo:C+6R(rTq^Ԋ{C"3,fʷ i"t/)fgd\ T]@=B.X]lu:>r(R ImXRuo h``SnF9y"Й(a,qIKvzyu;zzjOMQ[٧8YPt%íڍ!4zKrO5@Za7kAm ߈^%\DXguIt0y}17 ŚfXZu;ȈX :G:A#T-~[]@.sLr:+y~~ZS9:8Ή͚1{lAjC.R(_tP,M(A l C͋M{W2EaX+\#ǜgvS?g=;LƠO #sFmukʸ(lDud7qŒYcQJ\>L[u20[Cmf+^L_|RO۲J7zqf.6x}mgl&T5F5lEߺXn 5VٰUnK/ [jк67+4ƾ>/Wm$Gӏ'SB~ٱF ,q\o yNs0R3f roݱXgQt5f&}ϛ79p1)0F#3Nb(~LM%)/R7  k(56!D#xX\Ce-Bۆ>=~+ρ5Aev",oeT r(MePsc> tTwhmJxEg-DE8DqOUw<7, @6 xHk2I#)u "[&)b0 .32gbVBes.'gn$3\{'@M9^j`Ø K=n5c=̉]݆Վf"3mpWڡ, f=] =(/,PwAFǡ[!zсXBp|kh'4$4\+\y}62cf6LkۦľWZ[vSv ]YlG=hVA ae2C09!*gqׯy&5^TڐU; Tw6xr*J3;vER>2YXQ+ S\;W Yr- j賲90I ,2s*1uK-^RsPloaCnPd,م;ߐJ~0' fnו0웼h~F(fGyq0> f 1[H)>fSl-!π"wr2*<%ę'̿X~; %4W)?Xc@]x'z7hd i/`t'} .F6@! 
{ #XNq~5&9F-`7V~땬n&I?!^VH9Apd@vU$#oJxs)$w /KְB#шlc3Qȡ mѷ9.{ gWߢm6d0jf܋_8VƽH(bޅK5v1JN5WvKrÒ[Clph㙾S` ,W{k_ ZixYxޅu'WѢSdQ~ \|UnZr/q|b{wH~V3/VO4ӏ)̀'poW@_\;v5[}#03D5gSf1P΀xx]ȷ< NsGc~pF~=Ah(ɥ8]X kj{ecʂԋq C:x8C-Nӿ5@vd6HY?2nXw n۶m۶m۶ܶm۶m۶;MڴII&Hvp%;suL# Yˌ(B 4I$Wj*X"?'&I[8r,S -ntJ(˜βI=!??=j;xB.'EFfsP°+CkYCn"( k7I j=8p '* r9S@~aрsN իr` vm:Lխx |ļ>7Qw6,+([K1+}!Q/je p SaaeZsK6vFLvcwjE /uW`u ##E]\xDr &yqG|a^(Gizz CqQe/ֻ)7I@&i7/l qu/H~zL-KOd9Mj'NVi6KO{\V3R1;8͑5zvq*c@4#6r;hmHQ9EqFW Ubh4&JD"04vBMECCѺ> l fp%+2|dVqN^|zv\/GH;Ks#s D: "Ec:VtDްntSLH-D~iROl&uS.aSNOMSQj)[>`&;ɺ6)]sZ><|.]2vcW|Ln-e]B =]Y#6c]boI1Ayj#RỊa܆MӨ(OwhfT0Al8 )[8ADXo0|>.sQt7Zfgot3Xҟ<_JT"UCc.Q W1CK$5DSHׅ66*m OUqT>hj0kR!5ϸ:i|yC_:*"VW\'my ze0w๘[RW~7lg03; |٬v_?8]X(,<J(1k4>4 K2.sYE@Zj4JBEfTUuFsʶMkQl3B ~8 Q2Nw⺸uE_V-yQyFʷmW0pZ`b_̕&r_a"ˈC_ +H`&8t o RdFy Tk j"Z%KG~*D.&Y[ cvZg +Ԝ븲|^)j`S\lҢھ9X.7Uqf5h:ss*̬nEyZZ!dUSZʉmI(|'xW)@Ėԅi `HohѨ`E rIzXi ÝdgRHZOKJR\j&_F/jC 1@ , LlLhþTH1T*YsQ[-x&w#8 $&ύ_`YvVϔqD~WgW^]a,mQ?ɘV:v]Ճ [S_%jNRƑQv,_D I]+_+;=BW]3FQO@HLSYy1@E%T~ؑUo:&lXaOGݻ=¦'k?\12~Q~0Ӫg_cz'&zxpn4fcϖ->\`GN~K//GG֤~z/wC<340v9sNmbwĿы~żzeJɉ a큹vopq%Sx't@B Mv}wRGw75 Ք}y!nLl!Ncx( xk+b?c_T?'[ nz+38AF=O"hٚݲ ̦MII/OAG`@={Fqdz~ H'.}h܈\OQ_D[z^J@:>\ċdCEco Y m ?y 5%I9% #M"mӯ8f\ԙ÷3FG02]3fau$ /`zppAp>y3`qS㵐D2L" \k3T x('CrOСRGj`]*1pDVQkt4 62B,<Đ|kz''pc5 x= \לYǘzVDI+-a믗+ԞN HEHo2ހMN?E,]n$qQ ?jCwX\#߀=]JZ߄ξyиT͍'< =lLܣ= $ZsJ˥i&yJ^^.-~n78@lW%K]_oX(xlyN!<ң(MB4 KާKaƑ !%-B/ʈL $RĹ|;)f_cw)q_|{fg-;ᯩid׾:0NlC@<|ɪU뉬A¶1NI1r % @cbSu$Hu).WmC0 dW!v4 ӻ*iT}N&;Ck|}y!r昺p퍁**%'>/ 2Q[v .E^ zbZ&fa::uVFt x\Zijmd*w lر`=(O4i8Wkχ(B#b @gʓ}DZh&kN2+ږ,VHmvnk`[f+]>ʡ DlO󑃅b5!WuWHi?U 5(" o1WFha~;p㒀bb: ^X֭4 0jo([k꜉>ɬH+Qlo>Ta]:uJKq@Pf{jC~*gP"Ld L.@c,FMI }O۟Bv0@!CԀ_8ufF]ф vHt\X&綸|isXuq U7dC(="aKPl= l57dE ]\AbkrZͼT(C37ZՃ : |$*TQGdA}zӊ/0|bQxD\xoC/W[6M{ʌBgYA5bG_ ;Xcd31B)M]T ɷghR-l8*Q^Al!nq[WᴥV]4l#YZd.-ʤ,@cXe'7>`~AMkEhJxS; #! 
tyQ+ҵ~OPdY2GYrHQ"Վ )~[TB%K%Iᮺkz1_ILApZZpV~孾ڽm#JPKc}c>Pa!,hSUv- R/#Y+ `lMI%- ^Xjm#XنN(Y:sfsq_'4oF9TA>UU B:oQgL6~-}_+_ج z3,܃A jA7uRg|ʌ}kMf,kQ}A4*-a,5_)w)NNlZN¶uu0@mV !fJv>9j,U' 5:P 8gk1xuIJ@-I#qFgQ>ˇJShKlm*x0H.}&K(xX?*}j7 E̥ۨG63u6suy=3uT#x|J~I׹}kGyjs!UW0LP7z\!8"l~Ϧ!sb~ b7DI+!z3ا`=#O8Li $c=fsB),pW}o@o1 i_Hh]hD\a"t@Θq (p?dYmu~Ԉ(mO]}wݰKarZǐ$X^|}Ě#Dѩ ^V%BnN_J?y+D+(=e )2o#D4rE{'rR1o|.:~V漢RaWN49C5]:[nc&Q2ÛÒ {oexkH-D@nW!Z?DžE_ 8`a^~* 6 b;$q_5#zNi>?];N-+#`p *%#!;8k!uZ2ޏ9;QUoԣ{9*DdpLtߵV,IJ3ou.ե˵4o]RP>=B7 4F{x9G[^a+ϪDrV$H@+)Ip@qQ0Mr mOvbcD!Ygu)a 3)lvHa$7Z+w\ŁHC<:a٬%H!=K 0TלY^,ͬQ-U_KT>$lmե6}Zȭ0hZXl{O nuO4*jd vZc+2NCVWK9{T̙t b22T=P9C[ԥȏ,5ɖdVKqhle.(gƢ+̝37f>a `]0ۧM> îh6Oo)?lsy2;!`Oŋd͚>OYst7mI~ňSkԉ"L)efHS "V\p4~x"%#H|̣pQ2+ +2{:  놜6pIWQoJרJ$V54îӬ4]bcޓMcd޻oc\4;1T^yWhV֢jLWh}h|ۮ{Tj$' c2e =4@~V{m5T23#ix977bHI (`џMX* ,B Z'AW=["5pG`nDv:MA5iIeiYk1C1 ᨾZgknTf]X=A qv'kU_vYxG{ #3 &olZ]?Cdž)97" yيcdNASGCWŜ'n 5ӑh,| 6Ւע~s~Ln muh€FK2zĐw-q (D5(օ(`HZkE JWVl)) єtOСX po|Vџj}F&nA'g*DL7[^5@u埗->qGlL &#`O1wxn2 -ymOHCkd|`&|w' %K%$43 wfd˨6;MؖC^TE[1=)PU*^Xwt :5A5¸YoPɄ-dRDX1Iv:(ea !wkl1hkP9d#tQSZY7I8ۈ^]a}x osSy5`<-^k.;57KfL7yv0c̝P?47i؝lrNG/8+ j>O'׺tJ:>.tAvHҰ%դ_ wɁ!s7;$\iEF(H6D:+Y|eO9Ŕ@ zOI jh0o_ˏ EADs+%H(ڀ\-.C VP[͞a?HF{( Dz13)TƄ"7Q8olld:qǫh757w΅@wRd( s7Qk~zU+AcɺOmM)ksr =D&#Wvaq=oTԁ;<"tD9K0zgjK N,duF"p z($C czmgB޶tR](mP2nb\Ek:h[qi]ӲV7BMɏdYαj3L!^ MH HT;oo /|(p0JQp?ϙ v3\LYs $KN'pO[z&^dWC x2Kk)7&V /ڋqAؖJ_T1鑄3Л`R.rް? 0HUfcVnH<2:2c.ݕPe2F ZgwbwW*HWuъ/*~)X}lh|  u!݋2c˛ .4H]HytO|-;oaGQCd\I5uHFx- Aaz =q;}H偋\c&]9ECᕧ'8\8*knnj`dM;3uJY1RbS? 
"[!)VΚ.R70@}'86ZAk3HDJSPQ!57"٭h\zka+8p?:&c  jֹb}H6ɹbT TԊ$kxPdЇ8Y2u٥럋]A/N~,]C m%68ƱYhQm'˘BAiwHB1ee2Q.2ALb%ǵx $92#צ'?O&~Ha>M7Z"pEu~%;RS_qǴ9 Qס1M6 I.p!r6KwRDz|^[%M7Zwg-뛮{2owI1L@XA c*8,âaf|a %vHQBER3 X >]4f5КB{sUJ;i0 ˔{.rtEut1 yI4sRdsoCPǥ=~>g ޺|(%m(?K[Ve'/-׌%ľمZ* f7 4Xd"K /qK8)ȈAWV)ۑKW=7Q `Mv*A;e*zكCbGzQ^>'(rAoE쭊EK˂+N9e>@>MOArO.y\ѼdPBZ߾I^qwjfN|)`z);Ga9oe.]SĞ)ڗ&Go'{ІJ800EYY31yUS=;\]Py_PX, }ԅ3.d=*pYn玼쿥oLRyљNOpv`$ P'|1 |[h > V]CtГ+23@j(j>3?Vԭ0>.NdP]T(FȓwM;kdx #dAw?'RI% aU{,G5ь|U <ҫ f2f _GXZ V-˶gjie@wњՔYo7mAhET@\Ѧ9|3T*TjK|:T*[ /&] ݑVŷ,U.m\U޵P 3XyH\.-A.~Xx{{TvPz0zE۟V(+Wt5=șp6~"!7hA6J lE5guXTC ʺ9Pg\gX bRqsvM+2u>J eX6k.]NjR o\tFTm/>?RQ7zHO_怼 jYb:$󺃍֥'905;'|*r#:!$C&˗Uy @ ! H]5G\TF`v{RAN.ktS. ( '9qzÌ6sUkQ`ry 7sB &MnaD7xİnӒ!Qv7X8ctQ"Pu0F'zPpSM7@iׁ H;<ӵ]2!ᵹe860.`lDKK fJ#VYZcH(<V/1xbi63,  𤾴x= o×yzǵj[pJDn"6W_̒3f/֜dەaZb|*iDȁHO } +H2Eu/z׸t;)?fA]Ji롂 \gbgLZ-Aj7JdYz vdՃ̥wBMb? 璓X Þ{첉(cִI5G3|a.|kŃgA;tflx3r8U hz6ƃ!z4/r'IQ A9w1|`!6r0 :b(k&`:.к.-3HˎENPxErLux5=82ųa>=橔,>)W.Pv(QW?-5Do K5k Ihr n3O.%i1&T:NI'Z OÖi[<$C2HPa>iU]JKZ49S"ԾY8o|k3R0.gZfkrbkyr#pKƁ\AM }]ږ=XC8X@ZQl u ƻ}#X>%rGQ$xϾ9*80c-.DX[6H NEբZ^#Z, AŽ<<^R!k%RŲр~C3`;¿``H#`/)E" w9ADIe7tC3nH噣ʰTEƱ3#wvMcLΡϨOLaof{$XL;Vu nt{ ʝ~}>~n|H]~\\}RLNA#Lf@bˑ!!-pY2bN.0}v: aDՍDM{yE?εRE[&>ޱd`j&?BLFcfmVE^RoTY̡psKɿ-?-㖈]~fL3~iLaka>C%$):S@I@&|a?fy.~Y|uǑ*NOTOcI^l!Dw?7 }d@#Ϥ$h l3hw)xMdg⛇" ,_žܕr_.U7g} >*#Nq$չ,+ZbO ګŵ,S-hr,ݵ˩MܕkĬM~q=k8ugoUZLIk5D|,^Bw;֟@[0=T@Ar bPiV />Ӕ5L+X"f/礝=Tzrj[pF';:/&5%5Tnw5ݨ]'h.,̥rS?3k $"=e9ڇM[(Ε91݅Y")U\'A}i#b'Ϣ87HvH4L"LPg+Ȕw`)ji)ly],V=U:IlY|{oQyĺV,eurAPGwof vt!RX~$f&_=݈Ȝw^}|O5W)}8xƖ?܈Xe #?LKʐ֦;Xl0.L\={0KM捙EZER(&Vp=$^#RAQmT:F9$I5~E1 .=JwRyfd>P]c-NR|Z;`p쏋*8G2^"v)HB~d cxAɎ](EchUz?&͂!.ܬ=tW: !k-$R<c ]{K}ypDz?F47IPɲ/H-eVR>HZhVZ-XH7^om˕yHu9܇z]jlK})qAhuS'HLz0lQXϷ/d!<>2nUxflaunnǐg|pRՍY> 0B0bGF~][t0aRy:{V:ק"+x$υ9 QhHw&곪ĕ7f_ˊ8Ia>HHp2W&5IfT瘨Xc[Aߙ0砍㙬1Jx[3klCAW#;M"oF}dv3ɝ vp"2'#B/`D`U;U do b/8?j_y^0M’#JWi?xtl ]D`\gs-zp~qguh٫2Izn)YO (lQԄPzLѤ\]}f&Drgb;MbY.& ٘ ֩x_)HbxK(:eŌ a2ؐpMPӘȳ{FobZ3N5& ΢(7;K | Ug#S󫕆0I*뉷 u6s@ JN]/ă.DH*M./WjsVG-4Pt++g7M 
;XD{r(Iџ{Z*vbtǃk^&Gޕ뇋0=1Xxt`1mF+5V@:NDCdnv 5]d}em(V}rj1\x~"{Ηg~C&wsWp^o`S_-o)}FVĮ k ^|ғTMDNZ*^y?Bȳ&X9>p_r=+]w\؟n E1ӍCQ ~y ;vI/*-?R7 1h6Wbz%AGbѱ'$rPgLhrFnT)yFm=|+.u^.,Ef42y-{f用mބjFcɸ9?\$![jW\c}ߕkl[?ug9Qlh<[jzBOybY'+A4dTi;Qp~j'XgR+߉q6Z {l0hJaqgٝŸ.7GۈZ?%~;4-z&;Hҙ#edP#Ƙd0jVE{m^F]u(+TGRFK` T0N,'gt+ygMIArF)EHMo;R#QF5a!Ee-s4QàYg&$GOGr75nP䀸YdRSz0={=le2;U5Hƪ<^\l+ʐ<|tRX u`7 ABd|[݇5;|qyʻzr<6pv[<?HSPkRB5Ϳ39kN63 ߌ38c;g2\%8}Ucd[Uσ+H:cihI|k6Dd10zgPpWp0]R>]Z7q5^H1K-_›B4o6;wƪC)WҁxD$Q{ܐ| <.<˧!NhEjNz,`\AW]{> 5*rS+a\%w^V\d?3Pc8T3v.ܰiZR^nm!Csd-vȳq/78O3KQ^`l]V > ]oN]`~c-_9 oB2,-<4k3ŒB& lW|B"qx}?٤l7$|q JPu׸ >[]װ >kS\! >sS\٫2.cq4BW"rgqJ\FQ4"3U̵4i|F &W4߆qy{uf&4 +pgv E%̵O|Ouj. |Mϸ6{ULe۬>c#M%9״m[|:ry3p6]C5lPfx_$LTɹ{(4-󑮉ea]k#$ݣJiQKצJi9L'd?6XcfN?XS]zd?3|wY3ݣ;aԧhw,&Ǯ3x(3 suXO<'㞊m&y"d? Oe? .z@.>VKy&éXNNq7d?(ݯ~WIF[zd?~j_4Ok_{[&2)ϵKj˿?Hjʎ"KBWSӮbtfJVҨf{w)F۴k4+2kv瓻^w95+GgV[`qQjѬ$&3|M?Tz55,^F%Yo٢O|:m[ne6gqʰ`T-G;YLMСe= >RQѲL&ץduc&cavP -.PW? 8FNZRyhZ^"xbmĪ۫]6ߧ\-4Xp5S,9ii;OF~Q-RMzY$oj\a刕j׌jnv+7VucөLI3fY(yw;THZ8ttvaq}4jɌ;JڻMYt9\8\~^QvAgy2ƒˋIhen@-giCl1>.~'˱8g dca0I5RHe.*=7J fȞfJf7z}@ &&'ZK/^$g1 r#i|?ڬKWyeYdȆwLj`^}o r$ KцEA۔9*nlljGg&0[AԆ]iԠE_Vd>{tҔd^*y<#N%}r>*fکnqGU]{]rVwQ"^T$@|M<z V]B.l׫f"&umh|8 ɑ`xiN} fGf5l(-g7f6IiN'+#{fyږqXt41e^UF-^F6'7^{eur<5ҨÊap,R9݂=VudԘ:t*`b9X}i|t0TજF׍ُ./-;.c&*PnwOc6J:U+A cDԪ@& ĦXڣa󔡠Ug-Psdڡ:|)ke'5CmݯZm{~lf_Vsنgr l5}W~k&%Ԍʕtz: /e|ZkTfBLIs`Mn}CG_Fbf(3(kC(ݭM(ɿJ+!8sleog@gT(-Kb[+ kK0ejQ ]]5BYy =8RD_aPQ^GdfJ'N}y2Z|cT7*@͖6{ci!ER%z#]_Sj2&&ϯ^qĺLcc gbT'"Z,dל]\yv,%Y5n]gY Ӏ-}L{~A@/6$5o~YK% &~fSUwijk3m&QKQC־bľ}4$%jKZ&˚> ; 3"D@5 *dњ}go!CCQ_.uk+bDh4,5@F11qa#Iz761C e jy@Ŝ' 3fx:&1o#=0QwNjHьZϛQmu1V}50D`\ho-ABk`}8}T5 Xd{{l`?Ӳ-F9E* kD._J۲[=HƱw3Nh_)28JQZHB?YI؁$AV>~@J+^IwrsE æ~S<ETCǺٕ@p#:iߣяCYnoYTDJ@b@"&Un8V-)׳Hs殌") J: GMeͺH† Y\Esx;Ņ:^5s_*j4FpT!E?]ZTnÇ2DJ"O=CYo#$Cd7qMBM9*Z;Rfy-w/5 9=˓mqz\#[72ank2\MK@A eag7K~zv>*Lc^?v)wsN|&FKJ7_L:5xE;{Jpp|com1z2ƥ:VS mw2VMoSmk*|u˨n;T5,;EP ?z Br+ -eGƣӴѯ[S?z1e ex;=.k0{}yvϓ ӱWAjYT*NYo5nQxuǯ搋+]6ձ-Q5fm25q;D ټ԰O5,۫e9޳ޘZFSsښf)oL%)~W%4Tc DkShj*5,[2L}Q`uNwٔS '?xA-la8=B9/`C#6X 
42,+CrO2x`Wᒶj.v*;Լ,T+{ER-Wݗ.o}bgu/G]VYS#ŇtN?S*bj&e"9){P vTA @7NhaˆPqfys^vMrң )wA6{3LBA ZwGぶ=:m"f@z%GJUJQ'jual $ňQ:K/-7a UkuR(ˆO_Z @oY8/"UOiDV"G*x+X 3x(7\Tm-*M9/Oav^TjV Ẉ6aq[q`)A\ܬ·|/r, >RiGZ/=v ٭~ *J$$$BKGq c4H04fYU3C!}䥌+ߠ9)ŕ&S ۛY7 mHE`RUs>!o69j%Ҋ3<ޕX,ƦP$q.] T6ypX Zυ"j̑)U*eP'ǐ(JG)?DG03u+gmւ|Xb26x*jhhRҤޫ$M$y᠄䮣ڤxh#b+izHb`#",|,{UbqqN֠s`'`\ @i-vg^G i5iФɳ'~k" _+K=iY-ѥ+>8AodVt7?nMIQ$i]upӊO͐ 2R/gJ8)zhlOAc[,qZPM+Xm-B8ò1Ҩ W{QK֙͟73G/_LUЯ`r콹nkW}d+0Wiy}%.lR ;n#Z`iv)|l!!5m.n ]-z]ix:!V8PeS%cldz jB8,>}|TM_hlOPF&g|le켄QfPeGKT4d@RyK'pC}vZ!u" g Nx>a1cZ(N|tXԬ@n̂zO'6!Aֱm^wkp ovyFO,o{Ov\ίVM:Wr;~dxT"\&8Vja1Ǖ&>$sGzo7"}3;xVpqf@: x8DASYTHUo0dE\tC—EX*l(oխN xL]mP񪶪tGyoĚg6} f<:D`1R7o P{nJT{JױB1PS&8u - 8ۄ_Ad _Pgή;TC8 O   dT Q-IS=y" ZMfмj-z `),hla%KoΗ懰b)el{ H-yXd绞eաbfpkS蠠_`t4C J\ ?yy>جj~~޳]5={.}+![A%-|psH+8p/ PMWe9"DmKˊ;02 PCr2_#qǭzDd+%9勿|ۮW} KAM5w|';{e|Xn Wj<2W~ B]J%S !>->B/]օYcܹ>Γ.Cw/(:m +J8Z$?IQ9QƸH9Q|s|9A}?WG0r yvr7<׿dRJ,yNS75RZ#1du&rE{C^-00ok{wɍLI&hU(>]FOqumrR(^t'G-W˼ڥÇ@צּM@Ý@-IVqoCS| \Y? pgtr/˱%Fmw~!J,)Y) lwj9ڇL+!m'ڙ_G8Qsŭaۘ~_G5ydlGlۿo'̳SIYת"Gq9z]/vXg>o'2ى-pR57VSߵ KhzoCH-˚ԝJpU:E`wX *0pz&DU/k.B*b7զYJટ8mQk˙Iu&y2x(9 P祭ԙ vJ=u/W0Y*T=_(p^o ׿G_ -N\!pOiOOٜS[7Ȭ ]@GyQ,L$LAzs^:ŜMOpރGRmP)#i"pͳAzCQYڭV1Йz.sPy,$Cմ+$Ų^ٓ[KN& CoӰMiΣe-c_)lDjxu-RO8 O-}ުteSޚu< Y :\.VR9uc~1^SOg(\i9ܥW$!igq=ܩ (-KjiԘPj:C3q -SD* : D4%uIX@0tI/h ]m$<5*IIY0%kb86'TLW¡'G5!$r~Zzr&Xfv>^(0 v(6V%wJ|˟DKIgԓa 3` -t"d_hwnq`˷Sׇ4NnHrHugQz^V$FJHDb$οjfϽdL3p*j#-z(4v |dŬ=D;/p2e {_nTr nSկyQ=<*ww]@w'n_LeTdhǧH+J3>zǻH8%cG]P( c ޔ {oxɈAXQOvUod8mTzyȄ{:'|=faG14򨤰Q>;r"m<^M|Z"aP.BL,VC}-&*Tw-[csYɡ H6Z5k|^2f;ҚIԊoYToT-4~:D 234Kal#4p>.PEu/xyLN|qAE9dri[(A"ĠyӍh&I*E$|^P =VSh$ꫝDӾ+GD?)`Ro;s$ǵ,MBZNqAгݺ5984_=ҙ^I,>b:GC+ 5,\eI)Sj"/&V/Ȟ+?Tnpo~nhaJN3jʭŨcmk/piZvN7AIP X~䰇(-hF7W ~b=Mq[bpmAXlx ԡҬUR|KeB:U#W; 6_3y7A!ʲ ɻ kRTc!SIzFijs1/an' G~M^ΠKށۻ*?OيWtba*%(?ÁBxVmHKNm7#ݬr%3 0!]4hQr}+g2'[\ۙU䦻~Kbi͌Z1V.a cIU[$Ҹr\>s;W}|wv:Mzq<"?t7 @p5iu~-z`a[]d"?i29Zߍzt CԀ-+!XS k _c$@_]oժ>^YIvG1vZRhӱdԸѺ0f{S=V9ҶTjPS sLsl3 8\$}gdl˲: Q;f3ݘ4t}Tb 2V}u0q/.7 "bd0{HfjLN[*EK'2*@=rwj^0 -,(i ШYź  
8P_lp@A0y/(.PJ!͏ArQb<2N^F%,xUdx vTH!&I rG]_}5ZdAtJ"mj+^'c16'-~7C`! +~J l 9w\(TXE?C%1\X;H=%f!Ң(EI_r]8gh4/hvG26/yL34q %*=}@=G+>Bͥ;f\K@nj\:L? ;n%aR;vH[Tr/xp} &$C ^뫐*vpuwG5MVs]<~b{-euXΪf\`ž8+խhcW$9Ur]Mqʻ-n]њF~JezP'-]3oż xt:eS^UK.onA\֣>qC$ BV 0-^F^MeuD@Ԫ}uX'Z{on:>7ή+ s*u7vok.;YMF5S9> ti`s[FQ Z/QS V͌`Y,.3 I ~bB1gЊqR>Đr s|ga\/:9RyېqB FOӚ':$a! ,RzYT<^Xq\?p:wBc`JaUѸL4`Ug)^n%{QR<@W6)"Qh8ϖ"WĢM-/uci OPkWӯ-`Q `'kpP[iYwCctb}@2Ʀr?:^K=IBͼO!^#B2NӶ{6T .>l 06s9ww$22 ˼){>~@ԶIG2s/f;3ށDgz.G|xiwϏ!L|oׇ;IM[NQiFmDpѺtD1]ؠ1Cq w&zwG?_ ̠m7Glx(,x";(<!( xb9M>!<דܮUt/>cuM7Ћ#6+ߚm߸O3R!|ƣdȉ & eb8;i .cpVd0Q٧ JqlQߴS'FVo˲W̝Ʌ>2S>B2\H}WA6W*֙?/[&k ,][k?cQd'D+t.,g_1KC|uց[-UmLI.m)~D8\):a %Be(>q~^^=|xixW dTfq ^O̰ Sэh֞Єa 8k6N\b:A: _rk\Lנ_Ȣ eV2_C!6O\Aj> l_݉6بw1o2 5]jpLl0\Dn طp CJwef(hit3TFL+BpIN03u]Y+ Ue8uLdPt[<7>lj0D-Jnr ?*u PnCx'dK)GPًyeAev1s* =ZcҦZ- q"TfivV26?)ioY~6!NLmf3kDΞmiK5[I<0H] sԶ,-PVq^UNoHPʖƩĦ=7<φݧQ?f2r `a2=ZK" IF,+f*L6Nq"'-/zpBGMߚn3dQD:BQJ|Uaw( F`),sBb> V,)ߓaI9s7Sr8N2"ɂNvUd0An<*j>Y2gmla\Gs{ai6n;MbO3{/WN1`8M+E8'0Y|>A(sYYl)Go%l*w6(a~S[]"ɑf592G߲c?M q9Tژ]/WN/=  ~-67gqO-n%u cjn>F\T3ϔ4LO83Wg+G33sK[Ƣy ,?fA*3 9c74{bq`lЉK6*) !=5ef6gpJL> D7ZP3GiZY5zнjv44~lTQR&4؅0Okѩ6{X`17&'/?հ*n4r&HxH *YEM8B UvN9 9VdsdJ/K' qTDe$ m۶m۶m۶mֶmn&9U* n}${ߞmr,-PYEM2V-{%L~׆XƟ`&@aq"Fa?W,+ݠhht? d;Uo5%$pϓExd06}14;ݾgY^;f57H+ˎup(MLGĂ.ʞY?ZtH]7 >T\{\(.5).N g;*`J=I>M"kXisԯyWmyp`_ǟrjg9f?1b@T]j-ӮR_bi0$s41)J!;q|:#ˆݢs44ʎфk1|m;n4^Ig.oPW%CMy!ՕU8ݶk߲r,zIѱX>nWYRHLY?VZb~P B~ _ y {+`6Ry&Yd>40w(?]$Pv ǯ NPi`R;Ø3&ߥcHwȿ ۷GT`l s Ynj @&4>" '$ T*JڨgjjNK-+y;~ ʐ>HOS;4;8DIc\'̚AƩh~Sp'ξ8gғB>zsW=ܖ.C5;Zt,Ox<XA5 8 kB$hUyUӟ&*0?`sŏm 村x\=ZQ~bP4N` mV@-0eSl6גndǺc[_4ڸӽzgCEBhiMfR!}! 
btU2O:lS{gB[,]HBDKQH;G\V@C'ɕDpH)Ct7 eNNoIDǡy;7+'S SEvdMD3XM|?ؑ2|N4{c&f;lc Yх5<nܷW2Y4ɞHw< P#ȷi5Bv5W"4,͆2vpԲ7W4WCEbw^A;Vxi ?(7I0Nkm栭؝z!gjDRwv\' 8 @.|qvy4Tp]8 }kgӎ8>/;6"/CR([jBjJFp *a$@W-"Sj޶7>kdhUȢ}o*mb4t,Ɂ\Ÿ{Mͳ%c-Џ{{QuMWLMRxHv~UD(n"*ȈXtr2%oWdFymiq-N7+JYDomVA{>"{FJdmNꑸ,8RJՑՉ_]x*zKr*6΃O[3BjZWwT޿w61?8~t h[^ ¾9g $|s$B PK +Ʃ/hM5W .B =2 rP VAD#0%FMqE0w$Ϡi-:d@ĞqN#8]췭(~)V‘|}>I3yS[O/Z0ɨuC২} *tŰ:dQX[c!Նƀ'S4M4 Vl)*ohrXgxf9H=oCIw7}-{؛ؤ%>\C:hrݹk%_an_v̑*CkY5Z(hؕ6]~Msvٸ/ߴcy DUīAvS}^r&]VRp5gvږnjuo]7h#&S}%P4,6<ԣYkfQ{鷯7Y~&eE8:J=T.Tb R2$A}ߺ֖q, /A&ݨᢺ2oM학sXsH.`LYDbt'TUV Mv6Tq1v2!*EHzB!Sɇy-51hGӵKji EJdƨkXI#H̦m۶CG@,(Z%P-)iM:,6:d,~ei]o-w}L{3s3YR4P}*R*@P/3lpԣOket!p%* z:Zr_Q( (@^8ݱ4tf@2tCuxBJ*?RhA)JcDhX/9&upvZ\&t6{u3!H,D?!Fޑ膟d'} FLl~( 1toJނ{y/o8DH7k<,8Bn;5H4a؈b]{ "Gk#jihs~^ipl-'k4ʭkO`py9]%cE-q.$}YPھ̆?|z:zgwxœfW, ]nW)k9_YAK 1Jj~XSQ;ئzY\Lй_mYmjLlW||!lo*ex!pdiwTͬnܳ{OPdb⍫$@rz=௾SO0"|d&\ pY'Eq& F;F6/*qտۊ\3Qr{62:\68:0*JN)UIՙj{~kR))pVI}BC'<9s9a$)):_0Uz{3ޓބaLA8 !HôcqKk:CvD0"O)fAUVu35"(L+|[cY2 8lTCv)ZMgFhz-jׄsIo@@i~Pbk'f}kk5x4z4l{8 /[8 kW/;J Y)./~~FVV7";)Z酴Z6PVE?xF 6+ "`iV Zu''ϫy#?gp'[&L/ b)svPm`(@_+qqInA|Pz;V P=0iZP; hY*y!pܜnv͑?)NvSF(ŝkL=gՈ{h%~'qɊ3X̯]ob_<7ei혫6bǂ%Nͳurr[l6tS}Vuu J!xj2ϨtiMv<+ycVZ/8ԍ/ fU +8HdYkY У*= PKb@F" F76?|vݛ(P=l*HylT3k߬X>MPC0٠%$( 6΃^ |ʃlNu;1+H 3g{M) ji_\v݋SC*YufZI>u|U]t uH}ŗ1ɽLJ\Cg,7Eҷ6)^ڡVN@MB(Ѽaf~ӰEeK-1v[@*K%"Ö`,QNUK򸍎d3$6:=7*kH'󄔗v\J;TJՊB֥mKi5<8I vn! 
ՑGU5}^!oq^TُTl4$3bf ?UOD(;W񫹧!`jѥhrݝ?Q66IkWVuߠ7 k' }Mw1kS X?nd 0I>\[4|ZxnQ|h'pʔGۿ ##ߋPvF^}oЋx {t/r]sA4vKGvM­$mg |< 9X[Sy u lhRdW<*o%E $qkW\€=je;^MTꑒr0:^0Eo2]2RkݪXvĥ|ɚT-QvW^_okd9&ٿ,d; J(Z JUoکa G G}]|vթ[՞w jVbD3~/D۶$5wx_0ä|b3'-!َbBDUT+Mp'`eT;"#^=> XvԳ)26t.H-%DमަbcƔ_eA6ilh`-@0m 2ÀQg8ٞMnAbΜ..|!w_Gρ!CtTkyNwIZ,K0Ůݕ]^EEe:{$!KiZ0m;0Pý^&UdssAs (  ovvQ].Gkp=k8' uNxm@Pz TOxu0 bܫzj F)hLtB R\\[+YȪL_OnkP?qT]^ ?ʇ*]Ϻ- wz_=Amxq{fNdϦ٧Li%*S݌Vo?ڛ/\~ߊ_>n=ד, E/ 1G@jhhXfh U5:6b4 cT0\#C- P٫-I0Lvѭ\)YS)N}ђ]}nM\9W qY&P`ʰPw}IRdBa>&{đ Y4ȬH̍Qf~~s6}aB I߶zvV}jU @IgК "yf@zp)d>HTS+_4PlUoQl^M#c'5v7eo#@Ned_]|=Ke4{̇P]O" S08ݘe _E*<kʭTWSEjs@B_4X='2.VUYxnUIm3̓gaG&DZ:Eu3j㥀n^wGnɰN)/sW8t@R0e|tC{]7Hm~{6㬆n-^=@ Wwi$oKQ#s |^<YrXFjWBqGc+Rc_fZ4\Dy\5 6 WSZ}DӠ/FHO½ /=)ŴMb_8~ѰBLQǍaubwCE%s|!# ƮJ14`w׫n9[8H_(j$Zӝ5r,?W*[,~#vҭaD=+M{GJ~R ͻ﮹"&a 2Oy[9\D v[MmJ#h24)W0r9>!47H&ذ2qMSxyoaRIy/RHZֹ5gKlC*~죵@"}eF7C-}w57˽ '1ηee{Sf@|\) sY#?]=m%Jm@e>>z/T mBV\|%jU`t!X,sRT̒U{nidp>/!_Yp7\b"A72N׻Uziǖ\63͏AlXؑEnVlϫc ӧq&iq SAҽAOj^jwTq NhAuS{Blgˆq-/$z4:l+ag։68~ɍ#~DjZw|c0٤˝Jꏗi~|?1~m%<%Oy.|UɟΕݳ1u~Hcfz_3g_:w=O=ž(]P__H<`cd?arQy=+]kZ_~=lI@H% 'Ь}=߻ UTw_(V2!ץg] 74x0g{X6iݡqd}mWqo;[n&f}2bM&P5DiME,- pp(]Lݱ6#|kp]lY ,e!s|.࿡kb}٩i$Ė{S\ae򁨣2B_7rovgzbsxpFlvZ-UO7ogތc9;˙o#GnGǯ_{y{ _sҌ9ׯ 62{ 8,/q \3F~0f̈́ u3y97aw- 䇹W!k.}oT>d` #}{ 8yژ1f=g-M/Ro.WXNJ\,#- h?9%濸l_A~j@:Pdyxm8ǗijzI܎`殹U@%W2Um;,V#*nZ(%_W:;HF숃Wq]ӄ&ET5Y/M7RY\>t6>ޓte?Oi dE)ctoSGz龾{fYvO}}.Sym[\jJ8Gzn^,>bźc2RxJ=n$3q*mn>r>4CПXu2(tTD?ۦI):#aSwIdtwmH7|N?#0oraggF']d>_9kmswN5'VRq[µvF\;Pk} 4)z}e: Q ;;榜vMiHAxz;~쀹/mg^d{wor LU`jTe~boGg}ØwD݌XC+.6+ܸ˜=lPb@+qo_Ѝf}ָ_ԵJ(!t\-bS]7^o_$ Qf;>j(,]A2^ B̬t$ȃ;vU+L.yc=i)=vbʗn"u'rdcw~vGnas\5CDlmOɕzDeRFz5qÐdL;fVgԲ'uv|s{@&z\JVR;aAc%YJ%9ߛLԳ u z.wBz%3 0$o(4;1 $BL~;.tI%y&M~{wJV)En6W`IHG'}nu IA9+Ͽ⬝ڥ:ْk]CRe'%|qLH`)ҦZU8}*TLw~wH|z7<z %H l/XV%;c Vᐘ5\JL+ӎ([~$ԩdZU,#9Iqb_?Ew{Y:n߄&7 |WA˕CYJkUOte~> "02*vHL fmǤ5?}re `a:*OgbL}1 f`;aeh0A b0{M)COOԙ# Y0Nrr%)1'4%X> @(D_xPy]dN҈TBa*]-jOiZE#آ@ |X gD}QsTVӒ<ů2DǸA8AxWǥl(5ເQTU3*̠?p܅NfWh"rU.K|nGy1%Փ`;}r'4 J<26& J%:XUγ FNFʟuqr7ZOAQax҅J#CV񟏯֒x#9lU6phΓk.%d'>s|;h 
U:ټ2Xp.TtF5!Xg"GH*@U 6 ]rXjz+T}J@\}߳@A-}ndCdɽuöC+a~O,EPg`] Mlr?U)i*}Wo#'+}3@: DJ۪_w4Cۗp}DN!?f(`4h\CO|ǑQX>|<$J~M>c!hMa@ 9:zT^Tt?|ՃpSڶyhbA`;һghy+C;$YĥaXO!k&*Ix9~wLs7bopu$Zo07|\ٰB5>?K_eև3jT"jꀘ'fE'Wҥ坌xk゜'4XBh;W>.m(t|NT[ʀ/?MS gGG91?][ԯS@8[H2HM0UPEGofwE|A<Ưp)ɵ7x1[\ 1a}K7 ke=]Q =]a{( (}/dkEݕyL1煃>1Uz̘t-DjP̿$Wш `4Eí#Tsb)l^ R°jU!a|mV4g'ʶgAiLHK*"@R6t -Soַb*WMA/-1oÚ]5@3{ma^ _eeщن}RhH@䦡3R#+[nLԴP&UnW 7N/U=R"n%b<@EsY[pL/|V}KC+jtEZ1A|jo0NV70i Xf"(da VKF6N?vR*6zvg!V5$kĸGKGROصx"qLqXhmfʂ&T9i½Tkr2]`- b2h0u_a D[~wjJsCm3QUΛ?3?FaoGa(FpimI'@ Q"t:sAjJgRHyN m!V岸J/fH_IB^iӼOZƲBAļLi z%J/\B?Ttzޭ h2]R]z^ kCyJ?|.1paS07~e/`QadAEr@S7]ќþdr$ >clYvisںo߿×<{gќI4ҋ=GzMdSR5$2B?x܏,/"vGOpnbG$!/&_, OڶYeG>#iZL$ڸ.RfAC#a_ydasYj-#rd)fN\BNrS{Bwq 2$ǜ8cvKS[] ѿ㴦v[ C7(6/Z9a4Ch56WzBE'q]lp+Ps˹`34ialrd..gg^))#8SBWW ĒQ *E5 h4!>*>t\`r&ϒE=+THAF/e\H]dl16o)\R)0_WϟcQH#WK>#^ëQԾ6ptcIpqFj:#P+2A$@ڛ0}#HIP8\@dJ(8* 09iCVI)ZJ >N8Z*ҩM#R%0m}])+)Fqe={ v=}w]= }-;nٝԕgB7Yꈐ[o\5Wܟ0vsQOiBʫ)4b ܲ](SGJ'NBpܷs伨6좩Nx"8b~{A1PSG@wIiw{΢Ρ!HQra~]pJZXMwvC5aUS5qutWvPKM2m*)&&+oOV&p5E#&ַ?*X,56`r[F+y1.7A`d-bOs}i"E5oP>)(W.'mAcZ2v*ga6\HHRk.Ys%mBGT'nD@x>f'׿tD{bTJcz԰ܷM^`/Q=C2dC@ 3 vdV!7=~d{MV⠉) vMvJv2B!!.5 |!⌿5 )ԋ8+2k8)L"Hr⑅v*Vr:e,TPI.r]Sf; .w)sQ9)pȃ$8{M=sI';t|Y_wxMICKNPC9T 51PcqP%_7TXuNҹayWkTu,˫^=,HbixէYSK9D܎5Nr'tn-fR}ߘf KˍPIW#<Gj[ޫ5l#v O#6w{E'TV_F)TGN.!^Ww41jWiکtHR?&:H1qqfJ]CBH6JBi] 9xJu+(JһPb04/ xַ䑃`2WaZ44aNβQ)+ga)8*wPNm i|j̿H6I֑]|KUK|`GAZm_*e)oDP6k~>VI䬤Vj  &0C:O f~a|Pa)kSrN ľ mWU>Si ' XbeEv^K$*wBa*Mxj֎oEEnYIUc(#JHܽsD1*w=`=7ˏң칤ͱ =6ؤ2\b4Uocj,]Ce8jzVAQ.-XV⩖:ٓ AAI} ?7.r_&>Lof맥_a0t'&!AoLE t_N` p3l ,# X>hsaȻo;"}ìj!bSOxU?Ҋf49z/9t|ޅ=VxȤ'"o\r8vkcyD NQ솝._/|7#u+cD,;3lh1u|͌N=CZг| f{mIq6|Z܎DW"8$ިcm9O4dޤ\(g3r-^⍸U٘nrv2 j+r"C2g]1 u< sYUѫyY̰ ^pc"l7R<'3:[}}6^˚J u q߇]]bjF@)1tk?gV96F/bj`7IPO2v̚D~"ƢRJ('F|z?q1:okUNGf(̟Ie\D|[,kށ}ÈDtx(I5}q׉9Z%%{K]w:>Tl0꺇QުtZN)u՝6Y#sR>Qcޟ睊 0K\:*TAF|s.jJ9xu?s/ jy3(2 r0u74ַoebioGņV1 _ḫͥ$udgCR rѵpuh~X+OUp>C+Ie<0+1^8-LC9/5ijLN]["ݧ+/c';w>%g$^O~`Âyt Ͽ_ Q2v?K@YӘ%˹G0@IH\oY_% m])H(EdT|W&t3IKwcl,& G%_SJDj_Au <Ɖk@v\ 4%Ý+X<1PNpwxVcqe[ں%T9#* nMXo 
vT2͂&\M`16у+o6$.*:_#<bdfj2RGk>.Γ/vM%?%heS7"D8OmH?X (RkKq^ۂ\ʼn/۱tV$O7PP&RD }0 VtvT{Y%&~եQi}rշ!OX~v,8Ǯ3o[DQh %Z(N(GUkWBvSD+n 6'hd%@W;06K(z :-X׍cKDGQ^>Fb]4 ny<_;( v_tIOVXP&W F{- 5t P^ipyn:P2\Usd:|If kU]&gZؒ#<}dE|_a=L׮ZrʗEi5>e><$9![8F6RκA^I#4 uFa*,ŀl.(: '9w("΀FdAJXGh? 6L #Geͨ0LZ9p<+OM ծ'98˟ӪMM#K&3AXvN/)1,CtQzH9e =|]oU x!oTc{:jؔvcV k,zBTH`&Y=kswc͖,<j OZm4r񣰋28r5Q{j%KffݒBsh_l) ޴'lQR]~bLY%`'-7 <3eT>WAbbl#.)BU9Yz#ݙQ1aWy,-vpPȧ)l0tD/bS9koc:;R"G#dPI}}-+E3W9P[˟߇ `s5|)|fch'A?-/LFtTK9ɱ ƎkC`Aˇz6mo  1|/*ci_V<6=\IdR.׌ cCiS@,<+gn&ےb'[;ZM^yts䗤v :_$V'$_zu%gMS#_D"}*ԵZv@9ljWIn*Z؆$vx;(8T +Բk5K6n ڞKpaLLm]L MZ T3yQee;G'S*H[ RQ"]`]l)0I+v ᣒElj]z-x~;kj%a>@?_< >`'{]BP> hO0.:0oH}!ռb:Biʲ铒s\ $+Dc)Q2" y\cݓ𯳌|31 +߼Z̞]6 N7f756Κ/Vu8~cw`4`Xߝ;߆z{7^`ƆNZ7;?y(Q aPԧ 晰_7V׹Σ<%7o/5Mi4w?b޾\_ IG;s޵ku'mO5%iylGgV؋r2g-QR>x:jB]{0!,BzrTr8,Pv%xX{DΏ &{yEk?%nA"qH$Իη=s#3I')or,L* [\LhG}rCҩ3 [JZOm,u BpH{l#q`DUir2;cKxh82')c>MOu)e\ϧ/oJ RW{ۯ iK 2|s;j?b> _IkHF^7$2%Ny,Q +|9Il$FdܑCTP5z :D{9 *u\KIPŤF-9b咬k3\ RRVE9QT َ/vi1%vE-HMp6#ʓXˆb+=" %jEBo<^@jᥩp/ jat2\9TR^'򨐖oћ+aJ^FEqb"9X1`tv{/N<>B&@O5a S`B.\CC|\#z vZjL&Y/޷(}i+[K %eU0r?EQ 99ٔE#"7 2! sL D ɣ6[R΁fe^ЊkN ‘CLr[Z0 F<1S&ʹʬf &%)+QWWTdpO=Q#ݾyIm%-9\{C2Wvcm>^?u_=o.2L/ަXu/ :>Ni{*[ &^ 2CfFTw:E.^spΖ"Cg3m/IR/x}~.,U^{O2[ Zj덞+s4/9X/L#ɺAzD%W$NMV)mHTep%밳_5~~cnԂ7 ' {XrmhL9s N|YԱ3Dwek)+%YҤRl1@K5Yr`˓C @1?q3$쥧])wO36*FhC^=hڱE'?ҷrm]c}}pA\"̾@:cto*'骪cϻg~P>*D("UM%vvZ:h$)ea|JTPBqJ4Jb;VoH˹%%;VoPҵyv qy4 x( A f)*Rk'/ZTLia 񨳍 :H;ЍDظH?BSI*䯂Pb8џ p LLڥ;Y(_}(fu]A.%FDزUܐ%kdKsܐyo= <+Kf&x#EWpOu lN+W(|m&ԏ`?]&W!& >D]GJ[)-cuBY0F /% Ͳl(F9\I?[~r?֮ZW2ێJdT]"T:z*|hsPB-K~$;P5w ҽ?b{`\#Hظ,(tOV¢-# {yT2a\Yue Wm>5R`[7"c{5;a=l*o;Y%Lag21Z-u7 y j<Ӭ2L=8љC~I*#D R0a\bݎwaʏ?IA=?@8*}a^`htӠ]Gh|O!}-IќtسsZwA0C!cuOfuvk,u\eb:6U|YT$(#:Ԛ ,-y|7, hv0&9b! |'vl0;AHLFx )SŰ= "-"Hi:#m`q\~+ ?nžR)qfM$RJ%pqڶ 7]#bEHZIqO*F_9w]}lW+M:ϙ*xxD_R_ĬGh!ND'r2wH>csp:uF}[Lo3| MUx. 
zG_dlxg6١'rv9v**ysv-lؖxÍ+VBsQ$HZ#1k<gÞfݰnlU:Pb=rGo1srq~$^]tZ~$ oS/?rKz2-Sw{ن' ذo4:D2Xñg/[5[qzuS5lakZX8WY>/kf)\yZd 5=WQq֎9il%q;Όnb̬a1rNf?T^'SB8nK^7BbVlD@$XSX9@ӫb:n,;;r1kH' %&:RLΊc^qr(}_?+I9~NfO:L2 G,:+sƖ;(u?N= WtMA -;Sŋy;65÷k0C91G-; W49j$6BBÎA#RYoGnK>nKAO9)_Z*6L^ŷ$w8IG^a JH-zE7Lcߎ\! 7^gv{߶O9M\p5`e*:dHq~3JvP B' 8r ] e"/K#`"ʉMzy~¶-&K!ɸisu:k\;c.7"' $o$!{e{/8Z2tNV)L4r5;ЕNӞ~Qe ]&xJ.pwD ܽNl r;X۶GdV/9G VG@tz;a:VHhMq[$"Jnx9{f,gn-DV)gfv Ruݖ@{e_1T*){Q~/!mǦ;:ڙ軘ژښ8y꛺w4xEsoZ*memtN#!y MJvuHr>% _Y(H[aԨ E4MiSN/?/zW#ofKJ (XKjf >L9rz3b-K [XYD`z0ɤƋ&A 0I9N9zp蘼FUo߾"͉+,}M*MK]KsjpK%"dH0;k?qBfԜ! 䃣l^t7Y> =7C]f튳9|'RN훕uz|TxaHG!9 3Se?:u0b,Еe7r~g4.P+e&'ǫ\Rհ d6+sNEMɣ&16Ik fͩS@*5@1\>bKaeŎC!S/2ϙIFrce0^;`mGc#e*v*(Ml,~f"DzQA#aitmI#~1wո6}]'VF5'i~4_z>9ʑ{ J$^^gѦ<:/^m~Oxe_V{#ѭڧW!qh 1O6a𾱇YXYG4=ɮyUtZ+̗}7&o=nSX -*Ȯ:Sf ^j L7A@7L dFF,5R-E5ÖZŔ@èhr ni\[\v djM$b Hѧٳ26pc#P{,1I`C()FŤ/ !Bq'׾) C!,QNŦp`Av J<-g$#w LeH5c2F8ۆ?衬{HDj$&2q3 7,kp'T.0?s⨡Z0Ch_39϶jKЅٗRH8/陼nu0ep=|GQ;x?gzOU̗X' U!*p "B?Ȝ \V)D_nvBT,$D:=d( SZA 8>% 8kO 2';1ށH;(. ¡\7D,uk*rgrx!|ܪXD*F@]y-S|27\v) cRՔ6R= N^{$6 ǖ0⎮6"IIϒB}(pW^$ 9}F͜<(nhag+GD5^p˹jcdJpd.F}ķtGG_KuHbwsi ɝ J$1s}EPr2t0t,UBz3ފs9ꨉ %G˅`ewϕ0s#+i=mfYˇܲX,|p` )"Д6H(4ǽ3J^)R%Zぺ}srwhh}&E楳' [÷ T%Zy56^L1ѥ9qf `p*\umcAK&1ǚݤ6w߽5#}]SikU8yRקkF656 Q]kdyjFYlB)`hjY~R*Fh5re75qyL_pF1ll%[n B~j UlbM,"e ?hNXԲ.aIbP^V|ь%Yi/HOߖ/w4Op2v=7; owX̬3ڀn;2\}bX[wxt4bs` OpD)yo*uy$JHpȇ!@aayVHE{JwBa,!+L`*υ\KGtнQOܖ;C-B 8&@5Y $2}jDz4d"2-|lyAE*ߊ:.wFb sgn1p0h'g -K;}n9qGl7zbbM}Ec6e*)ZV֚#zyA!O!=aZ‘)0e6AgGHyTzAT5 38aȂPw;~wb$xs=d`L-M z+EA=!Q*m$ſЬA !`ꩮ<d-;o%;L.ӄ\͕J5;u@3ZtΐJ'ldp[pҐ+{q,>O;W*n \=oqy:G(.bm#D_pHgJj4+I'r%{3C2`tI KǴ )}5]@( e:8lK/uL+؍ ~u縬Z 5i*JA^'vʡx/lzl3hꯩ@?^9r|Cuoi/Mp N囆B17 ęs ֕q&4482Z 4ZyZC6ԣT]˾u欽޸>#~l^v®؟.aGƒ ^33AR1@r!kA0N*'*n=i +Go!gzy?SW3 aTy8;:G;[I?Q\Qi]gf7cu4_

%V7$=1$l>}_ 1 t._d۳yFIwM=\<@N~e~8fv߈%Dy^TOiOqWV5a5s =#VwFotDv]yU6DZ%馨MvnnT.'B,!$jd_9CmIۑu3-IOS?>&~[ߐ9:8Oz%ƳY^/׏V~tQfzmU A}ٲm8Ȏ/$ƻ% &t~Iocn|ӳgQ5P.=J6zvȎ!)<=E1 Gr!,z(X7ȂQxMqDl?#ؠ AΞ 5nD:mnZseriaxE@W` u#84Iæ0?Lm&0!/G+^+ m0Q0p ,|yf[ ك60k &~Hn9Mww1 }&6.9~|?3k}TC`zͯk_YP<s)X&lm۶m۶m۶m۶l۶====DOLAEdEddjedݑx9N,lGcW{RX0'*%-in44 6ٓ i2Xr` pEQlقDI b))<_H 23ԄyX̌\yEc˰b{f3`?A]݊)arfNwq"I{̠2R.%{0~` n U\]2 dl+;Fx_8 _.%ƫh:<S 7w ^ah%|fA2웅|+jX^#"C33Eq%p&rVE|/k0EASf-83/gPi>E614Cڨd7;.NB‚DZ-Tί|/ 0*X@*+c?NfBC+"4K2뿕X-eKV-3oZ-@c](Sqƚ2K3}3T.JVBQ[-@tֿ2P@[HGRFqYHW-iImk%rJ{ie ̒aうh+y]6YV{|l$ E2NGpߕ], /oBB޶l6uRk2җG}o6lEWQ'w(ͼj7p$|r-Q6(k{qd2vGN5{pytd tHѫm4:H&LӉp|v,m1qEv?ًtUA\HxW<y4m &@7" zNx?B`\Zd& xD 0 ^H('{'_jX[o""Ax!vKC2Vї@x \bȷ`J?w,bxLA+1v%p:v%y=쭘I˿ uzciu[qXvbJNbG̅0]f=*fnju$G ϺN1?M$?JyO:K6Wǂ̴DAWBB\~C(OޮVklh=dC ~1 tg7dv 3Ek]n;VH]oҍ߻p2?09a|(r8jH{ۤл[ ihm.haW\܌&iܪrXO[Y9(&][j^ȴjOG师wk;@vMbc`)& ];@fm3G`e'3M.Ν8(0M"%{:8et Y2HZ"1E}% dqR*~#DݷUciF Z|/|giV p| ).%1R$2Đ`$n~WH2e>܎݂l?5}2zU1t ̿.n]SZBڈqV =׻(jKN#SKZry[q<|]tS3+SG.B=ټ7ka*z gXnI@pڟXeh 7vukoS_ȶuk".r~3Sax?BP7S,1k|+y@l!컋ggsx$kH5+`{, #̸kZ$r*b!|%_0AjD.{t_A]tW @+;VfL"Bp|[#t}2st p>>̅;~#a3' 46i,GN 83zdy`-mg6 E|DB`5KXM8S!whFONg݂"3Wx G2US8{*(r%5g4& -JkP| i Xrp;"k˂I;ohL<p%튱-LWl?=yo8旒f+̂JY{^9TwI IXd) :g_:z:%sj^6+e%ϢJ:ds6>7VjɨwĕD  C2,ߺ*hIW `*̞I{>UAf#]K,wfZlp tAAL=glo·:$xw72wwi 䯒)'.n2$q*]'l' rhebl=r kqAi[zE}b~)Dlq%7ٰAKo)'ΚgDF*T`ve<Щ8mQo$ߺleE,\̮ od9̌oݺrvG:/78b GpmYfW@ajɤ8plFQM & -H4%!=9rB)8r)\IXjnyCzYd5T~Ak#@c6[% D Fq-=. 
DfΆv^'Rijwf"ǽSqt|qT0?f u9?=ǠOK*US}R0^mR8@E5܂ȷq9W{7>A3U}f!cDh),,ubjk k%cύ!{>cvTo~a6E9>ykY'3U͂eBnS psw&R5QaPaDO0G/{ v~9q>#OJSNxA/[Nw|OP]?.|1ӊٶe{QʙطyC,%S%1DaNl[%A9*fHk?ۓA2^ y]>BCWAצ :B5$Uǒx4d.߰~ OQ,}, b^%X͉{Yn(Sė5N*f#uLCQx[PA (}&]g=@u{bK:3)(_cGf yI亂-K?$xb=$Üd8ˈ98=1 a^t1>"44Pd~z1O]mOjګe|5od'OW{[yy !+0plD)3{Deοzmz\AXX$VzQ}ؙ]65-8et8ߊ6w~lcGKwkp_kgv hۺ#G9-m#W+CYLH@+¾d̠0hʭVnݘS>[=\-a^Z=|c<< 9[u`nJ.ʘyTE.K^S'ƷQ#/Y  =EɆM0nܑѧyq(usH'<ȍ$zY4طs֥e& Q9{4hw> cIorpCH3ީ )CVM_8.F1n?rN41CR~@w)o0EKoxdo8j^[T1i S'mbS| Iۻ65I8R c͛ٶ/sHEEɽnVBV~62)Jõ'͛Wr, e qtjӡh4S_+Pt T';2N]n@\_.[R(QQy)tp-}:.5'0JuZ2p0t \4 ۥ8, ͗q8|\.2_o53ܗ|t ?݇mD-d5#KV3P,~x8Nka%*V {oS962ERYLϷT&:'u0@EQc+NNv/'ymt1K4\oD;:b]g TFzbť9\ Ef$HDn(* G|1,^C~ta0 O82C /ej"k<Lypg<˫Ď(uhv.d!R6P{WBed 5XD{8|{qTHޙEUʕh%XzqAgXZiTfk{z{y'i'ێY J'BiԬ_,B#PL|mi7dEUtSK>~B&]~QI+"KqrPΊ^;W~[l\?L aAs߮Փz\  ~4 _n_ 9m|t# iv O/p>μk#9J2#ғ,H%[TQxa5lSO& -4O{PǸ}M1rܴ{UvRߐb+g'AkafɁu{Mo}-d; 1c``zz1~dȋ'*l=n |oxCX{ׄiy},P=4!5d{NC-9 9'-,Җ*8yUV_OMEeQE"6;b\iaDHnRJ%a:ھ&n> mkF#ͯfsD6$q8$Gq l64567&O?}w+$ $.+V}~LpIK(Tv2Ots>R8}6q+}9o>윷yO;OgNO;OfR 㚩YnUZs ptTTLӧaRwt0ULXgGS2Te*/U+UoU_NTtDV⪾9VtN7!S UKr>U71S+ZU69S0UyU~xkWT}VT?O|«ʭ櫿TW+6rު/^K諿^Uz6z7ErkT2ήr<,VuZ6U.zT? SmLO{TI-}HSn]1S[U|d/S^I};P5W;9;){K{W=SCJB<{:'Sfbtr٫FǤr(3'3k$gAx8oB5`8A4#,H!)c31F8A0c !dpnr @H @Twݳ~(V#ⲓ~yGGx|I`LB0("Dtq؉Y|frJ `a PC2Iœ$/i%)TTNP680, ik:|3tpnԚ pnF"1N<66e3)=lc\Xw 4Z7+uk/}NiHVH@2?OKL\97^xNp(!ґ =*jوͥ'@ Kn]σONHg")*usեbrXv=5KXCZ(A/jK!&4ُLap[2cW.K3Zup^t㪦wPnTkڽέ6WC\Y}a;OpSq5a梍˩b/St.ݩɀVȹGT<4)IH/x2:?)^m_.se-f%$fwYr[WQ=,`{חRLe+Z[:("䠀H. p7Gff1:HMoV JPsVwy岈AC +<9◂G5 4E(%PV`5a:ڝINUVPZ&캄Җr :T41M0mA)+PjZRB$t*ou=wx.wTb.NENQ BD4.._&W핡g{n䵎1-8錇]fh՟*pjK.mKàq!>2+!k֖;j|Pp MtKӴ#.x'׋.pg'AnJG!$&+m1 (s`r\)aA`XܝH;k Abψ@vE杒- >0s-SUC}p|2BINygnDMFewC t']9K/]Z-:L{\8Fyz<]z"I&:uj EB~5N:t4K8wlRθ!"9-,F襵rCafKU"qֽzT%,%k?w!hk_\%?@}\WF~?h.xsI혿gno%ݤ)ari/K%!9O)gQig%֟|]д.!V^[Y% ?5DK(A)rc1 aKXxBCqu|&j7{M뜑fevV>Ը0Kp6bM@?&|oQ_os4hpp;Iz! 
RfU a+d\63Fo{hMZe!76s4 >  $8wgSXdXh >Gu%?=j3ҭpςbG;.wש,D[\*T=B iͼmac?!D]W3\7B۲ hvj$V+%,f#C-OWkc 1$uyI@ 9ٝ0L%tkTWqM4Z4t vl~0"l`AO@"9P`@!eK#yFim[`$1ooK[%qk6*]Ͱ3(5ׇyduS3ЄtlI>R~1X[M.Y8eEM>ҜVPe21" Ix۶[ !.s* ض3v9ʒd]d:B÷B#Jx[N(f,cύx&C"Ȭfa0b²#3ɖIHY뤷 dj}jԬLeoE g|X^ [)Qs;Ȯ|Ʈ=3"nF0nw @_~Z{iS8˖|Odn`TW(۴CѦWcIy׾ OgZm9R&Pzb_)}xL]qB֜J* j#?pv?xi"_`~IC9{{.S ~GJL5ғVUifX5OavN08qV>91r,3@G}hyB,1(E8|S *ƋZ|xd_+'H覽;[d8K?7IRB|=Ǎ[)Ell ;皠hP|Dc'Sb~Ue[d]fU3a(62bh%# I Y HiJƙREI |EPY4Ahs:oJX8t~~KW%3,X]aDY + uQbN)lDD<6e~je<^տVY a1nV׫k[pn)V Z8q-UYz)f )L)nF爺V/[~jq I_`d$ Ξ.RDl /"UV_VXNM++-x}Zˈ|0bxZWJ> {-̅DZH:~/ȲHX#U !5sUIy6M[FV@.EW{ʟt"zST?P7BhɶFPKرVܩֈ&P҈ AF`K"8Z i\_OĠm@o%ga|ZXo_0'bTn(Z'!5eWa}t:TkaGEGB9,p|F+%|c !4{l7*86 E"ǺaCO-.SȻlx3FFRexĽy8f)ׅ;ɸ9nv㋹6PrbG]X>wkP:8ψ,'WTmKB<Gx$o |I /\Fg^& ꄇP%Ҳ&ٱ]2 R3velc/dvl&ڐ4cT9iMbdfv@y"6o6x[Nbn2>IGdu`IǹKޚcGfw)RcbiSr~ٽaw}v7K$#8 E>!R wnl;Zk-&pX~gcQe|Zjp{o" b~RrdN+Kߡ($F=!p#PɒJpBph~OFf`n>wE6fOp5sb'CGhJ}1ˢGBE#74lZ*)M_ ˔#Nꉙ16Gfݬ!A+xAQv)#lG^?ÙǒDC<8ӅOx5D@!݌ ŧqbbAq_-& u0Re]!uOhXӇubU(Y&L;ntO)?sKqq#UՍQ\%m .>^I| }F% 3SoEV=ôD÷rJkDž@~#om$I(-^ ;RHaĜ<G^#uK3wOS2"֤!f0̃Y@Xxƣ]n{pܵ,>wC^LxPN/D ݭ2}nݰ~1v{3"ȑBR,Lń^V:7ks -ƫ]vKVKVVD%)DBkVhR_ rs]m͖<,\ĺR {U%p5isWLQP"7lV ?Q߫bO+0 Yd[eСʋZhmw5`$,zfǾ//A(Sx#pa%^uHy٨"h}F{Ch*jФZ-]?h@dc; 쿠5fj!N,!v<#RL2hl7eӺE˥7,#=؎{7D1/3 -B" m_Ejie $ǎzXf|q_5o(1\iك}kPy<1gEkXǝVl|# vlҫiζ=̖=•7Y +)b;UB7cΝtcQٱ EQ.lmWN/m1wߋ!rع-FYِ$s~w8&Y&hMPo!' }vfFYw[p\3sc5/wQt) h윆n]>rEV}u.өWz g+/?y͎lV=ySsjȆsӾR&KkUʻ¥CAbi;UoD"ljR'T@UYv-yAyP};a|1<):Z&xQF< P`ڝx GdKBJG=0< >&;l1Kb$`G!`Svʵ=]&zoD^0u1㵋V#+q3 ~ WmĬe%JX DؖuW=ƕ~zЫf%>*Vˎ{Xwp j*큆 (=ܦ_րkτg 9WO ̭ʁN2^*\ӦAhT۔#!ZS!wǖ{yq6\c6\Ubg"0 P+V, -Gc25kh3Id%I#*tVҟ'Cϡ[ -Hf$|W} #ȇȑwMYrW/[AIwmU;G,a[7qA vyR+W!m<8 FA/j𦍝뜩 >6H; 3͝>:GK僵V q );`. 
꾭s5, zpM0McMʝ Fd'ݢQ^^_Eb{݃ԟcAyV.A[˕NjP>1q18)F.gsx9û%L6 Ⱥ_p(S)P;%uhrLx/ ^ǫMMjUr0h-nUU5{}l\Ík _MfgI8l,H,nkv5zK?,"лFִ=aF Icw_ Hb*+GO G 7a(޵.,\ޒn `+Sij7M\C:yDWn{-N L4 c.jAs \ Fn5gd }6hAk DOXxEwt]5\x[j÷d'yT̋-3ܜ:ڮaƧϑʍN85߷x/ EmBu!N ź78q7~d!n0mo!"}ȖD5߃=Heghoڏkcݘ[)!_o/puMXc)Wj[ڿYKd|ך8fPvST_rՑD~ܘ^s,ӧG\ૣvN7L p3)>Ri mhb}=z:5`x8ar,]/CFTtIjl`̛p 8x汼=P㷋B ͠8_ֹJنD#a,zÚGM1A ;{ڂsnsX+Zz@gF)v;_/K ]Vj)?oGW4_)u͞n>s{}~; G-A;8wH>ckƇk5F:f>ݻY6zGFFx|&I+!Vg$V/a1;d 0^"f_VʞJNWx8w <߱! -6͊p;:;on?p(eDri78!e>u,$D7q{:6Z/ &}83z4(rDvaoz ֣ynDmtzaiOl{ U&]QȮ&SN=ݮ\5?6lx1 nl;9Ͳ}ĻvoRw :m*""s9 -^&0wC۶u!(UR4ljdSdQ;wq[1yxj|Ȳ^t&v=~pQ-l!{IGh}dͮFcmsɐEzn~a EK}ho 25zlp U[lY.mk"VݙNz+R-łcN,ݟ7yHV&Ey:wCh@A9N9T!Kq̼ 2 (uQ@NnhzoշsCPqI?0x֓ c @BLZS{ A-oɷo\|}3wi7Fpo\U-=VA)OcmC OL}K|ZZ4n"x0`<iHf,lVwݹ=DP`ܣQ$ߊ"~vӚ{:ɸ~2OK&E>"fM !|u_JcGa~"6tl߾4O0~c~{(cpr?0J 3eW @#Yd>P^{i.,?þw>2jψiiwiYNJ8]w&H+b?E pK G[(nY~rՁ{cJ×?8FA,iT*{*%rzwdԑȧ8aY3\>w_ͶG>2~&W%d~-h~6s.l2]z϶..{98@7 ^Cz}}K;K}}:O$Elн'긤tYYJ|,K \Bqp:R#6̓;!o`u n쀫X?vՄyfPJri'X¶tκb־O?/j sHPomZPʹT:pSѫ! VTEpTk=;bAWpf٧Ry<\^5(W)*7f]G]zFUjC&8xwU\_1cQ$#ҍWHn- l^Wɴ[gg?Ȥc,fct <kOLv%r+!ߤe$ğwåg¢ޝJqBew7<13cm+hŹ_Y^{a]7il;3mzX=#p 02J#@HLrX$LU(di&$e'ggjٓggjHZ/OLL§W_\G_Bg"8?[̘҆?韛g]B~Qπ,FT Ml{3uy}՞O繃S'۞{LD2ǞjJiskk X[<%NiTF?cƒ1{*5 _XUN RQ0JX͓Ch@Ahq!˟vi %F@1/I\mZ_:נrzI5N+G(S1"wAvKһrmB$oj]r>\!aF)i7nO/ܲOmt̠eodNYtPKE.d '4kLHZ j=pisUoN?/:IoXkqM>F* v(*> o OG;n4 E5H?<ԟO~RV# [L&4B1„-s "8::j۠##@jUՑ_ZdE`$#<ywj)dH (H&Z Q2#U(kolIztUZ [$|?"<>*rg$5*۠o990mb<(}o#FnLDk[$]To^xg$C#W ?$N G2|}G u+M@|!6?7)Y;%8[s͏r+ }rr>mLs1MUOCR.A1zXzu83{}cGR}y4N,fdFh ^sx5VX% Z$vxL+p2*ZdLᝪԍ$RqYj-ӵ)N^,H k628Wn>Tm-0)ݺ@FN@ӀTʻvmGͰ}U$+[fۊSahf 3\$M c78c/d U&y2JppyI/T j 0ևI7ۑ7Bt.fBx T`,Z2RoF=ǵA:KX]h5hpco~_TA2 WnKRj~+I4vg"~ =eYe7O+,I{>yɯ Wj%vP5ًzKa"dyo\),=]6 HrD-QLH6V&g[~ׅܝŐ/n2?¨ s{sHG.eʍ[dp씰 P Nv8D& +4i6%q=[^@K8H`Kq*M72f:W|ߜ"{DJ ) ΅6A|p3}93^) >C1:Zz]q)yCb{=y'hcw8gz%RIDHߞrh%άS2m]<  C 7"`e*6h * ~;TF cb[{Ϳ6~56ٖQTvwJ'6ZKMR&.nt$4aCqVNU:oʘU05ѶMi`ILBgjŊ)g*&!TN$GC(ꡊ#Mo2.8ԶAJ:ITS??ZoY ȕB9sc`415t_W*o2`׿rUfe|&,Ů WI%'HlY^qX9>}abx]GHНfP`э'Óg2iav$SО*Lv@T7 
;`(ٿ'b`o):@Cg XU3 ;_Ky*W=}W,//Zo֠bS*ljq%MXb|k: 2z1` j9O]`vgdMv[s=@td%+YJO^=t{tƜU"E6I9~<|cY/#/vNlyQ[/hixXꬬ\܇'2DT؃_$-ԓ%R|C(Xc+I7 W]烇*n6fpZ鷿 < 'fK(ByEO)A3 \gZu6>'wx(O&qq޾k ``r(p/uU?,(^g mUbK|̈́%0RM.G,:kkf/C5m gAk;(G 8ٸ5IBظ6 gJ U`D^DBP6u=Ţn/̌_ TFwb&ϝL7;?? ̀̀M;0$[%1)HjS>t@|SCg pJ{M}{wHZmJ!CS"Y"$F A.`S9JK<:+IzL)C]Vi`QK^X$i/! g^SeYImq AnzIYsx#wh\ajhfz͉/'bgg aZf`aGP]bPhuq`w.N5[2Y&: G%>0wOjU`f0%hH 3Ƚj=9MI+) s7% kU.٩KcwZުŻe9hH(R!@>a:4Hx;  32}P @\E[ZAdYN lL| ]M|~]ɷ<+&H[bKhӨ⧩{%IxsA?4f@r> ̷&hr 9 O-I@6ׅP#;{`w`mN:&U_Up1׬1:BHMESWT֪S"4ZYD') *2TZ=(N)tiL}(MJ1})_*M㽞l#]( ]j,(6v9A/ j5_NOE=TQU1ع|c*q:k!򤍧O^h`gqA(_B?&{l7;~Xd+\T-m+ܪU("lú7sz+.I^n'XK$+K.x/f)mHgY=_?ˁ>B"͂iJt#Jx9I{I X.*Uv=KBA`V,` Lu,d)) eY sB2Gh ḬGd\.߳.%HwN~ٹy";^<R|ĎP]H"6 } t"&0KCfV4PXNUQR!ta"b`Z0\ _3~lvFlo'Egt`m/a sU`yKgTaΉad@M'HYLрťDI&L*[Ȁ^ 5.aR[ܭV&cߒƑC"Tf@25k4,<]>d?Qz55SB 'eN Ï˔jbOy}*sm}F|)Ay:T(Uʨ| zw7lW?:wt>v~ -a'''G j 3tw~!F{) 0N;?"+J)(X\6iw@zbb7 ,Y=Q( :Qa8\P$;!E A@ w6)Pr5)IN/xg\΁C5P`ZE-#'YW d=Q q* ~Э~H{W46o/;N~rۋe=w/;Ù9qbG*O[v:yi XY>Mu=_nNz^k0IRVں \Wd:%O|{K loY[2q h#R8Yi@cr0 IH,+VJ90|?I7-fY[sقT 4 EaFcmMxÎw=Nk@=P9xUe7GVc;3xe\JQM}X!AӺ'r`3y[`AlWDtɯqw(QLƬ_ ưv<14ObVz$O k!zl!:4dda" iX1O3jB%/ʡԮN+\5þ3iarLL(Qvi36-msz5>΅Hi;Gu{4u)>y0ɏo33,ِ fD/m0}3̅X!^ G9[KXÓ<'qW$pr#?E+ ηx@\Y0H1c {ƹ8ҷNY +sg%1dcRN<0< #|Շ{XZ=u(~PjdԮ*`=f+Q{Wl=8jвrRUKDΚU+1""af-Ɇ_"P>4#O@u?s >*Wqo^3{5rUwҚ"bڨoKpyqִ=8fI$/I\d@R[) me,܋DIAާ*9@ 5=Ÿ>{4i)'BYG܏7Ϳ> _(_4ƶ6Ml2=lQe٤W٨J%X[YK(Sf0W+tM.SFq^Fdqm քN#JZ^S= vM!OzSTBBSpݶ%m۶m۶m۶m۶/mۙ_Uqoժػ:aufl(,l@ .1^pk_oaڱ)`LكD4x}L~ :os|fҪ"-<8. 
q%ǡp 3*`RW,,8"2*&fv1`t,}Z$,,3 %CYEiА6KUSAV9tiz(?&Ƥ1s R7/Ǖ:d2hMRYgJwq<u`2tؿݟDgy~0] Eux,e?Tjs=rȪ|7}=$88`)gg!F@ɲ ib-NQ7*DF.GmR^n9.LY 7ȢjgFr mZ ⬤?}(E` zEK)ᛟ )fQ%s bX;(%!zL %պ*|/!bQ,$tj_x 7%p-`B^ff?J%Pl ;fy SBx ֲ%xuˎU⑍#x뭽(O9w ƀ8\JV~ fNP8ttJgsFBB7&n&$5nmd)[{>+^y/HrY)uU73얨D _;#-m"SƵΑÞAvEGrY]Ho8'es!>߫2 ܀6`hPw.ItNzeP-rjḠq-<1RC݈y)tN+3؄N2aϺP>>Gc-yJ5F3Yz GΠTh9T9L | p 0^Y_&Dd}bl"kCnRx̑NrMx4W-E&^ O~[S,dmAc).>ya^D:̥jJO^56RuGavGw$ք?TqF>z66YrrVGB9EyHQxT H-\Pdyfz߼ :X 2W# Ssp"$fi?dqP<ٖcFtRh0!; YR%0\ku=-ne\ed&ֱINֲK}dst<Ǜ*o_jm+%jwh]P+pY\CRV $e; UOW,)͇my%8>۸ϜJw(9d Z 7:l`G6!W:c:41Db鷻B|:2I dkብٳ76PVݐRTAj]ʺ8URX qeE'\G-!CS*i]ʸΥ(d݂y߳g`hiyuJq)&819^LchMorRSzAދKi vhe55WMm - 0`sxЯ X&'+tG p'šo r5\C~>gGe4F2A93mg-Q+A_2&SG.i/J4 QuG=+>ȷKo"A{T/ِ,{qDrY^06!+ɖ]W/ ږ #a؎5㢂U~Wkx̝ng1KKj["@[ !$֧s}HWY3IIamYSMu ֔?UDu@7HziicD$,U\)][{g ça7ʾ,ѡ2U-,?[`,%uze6|l Yw.GtoYB9^] .pwQ9ZHuH) +FM4BHP4ͧO$2R7ڭ=}c76fba+v[zlV#Z~6oZ43 B=  ș$%m居p'2 .]݇c֩O2'l!$ǘ)5 D4OX J,%$D7,oQrs3A͹;4!զN'uۭ'71v4ˤ[s}D2_pU܏;8QZiBGJ$~|>|T2GRx܁kV'ӾBDZ#$6tIئ䇢pZ .g^F0%PwYp]ar0.s6S$^Ȣ[ 1%!Y)\d)v܇-Y54yJ7ӢM ^?PV۹I}o@,Z܉7P4-M:UG4U4 #)Ligzkwl9K3g"Wz*8^wwfp9pt]"_g$*B26~}PB&9O]~cB~~jRWRb/ŀP0N DtDH's|c `:&Y||<&zթ̚J7ߥ g6n@H+G9DCv v2=.zsҿ2cKBjاscփ͹%vntRY^3~n2yf~m_هM[|Z(JO&tq)-o ߭gJi?635^[]X0f.N*,m^GHQIDE!RB Kz#J:ͭ+U fW5EIZp@hB$jWBS߸ pZ`& ˙6r#(!VteGc^@Gr  eJ&:M%f!*{> nb#nں^5PT d}rdI6CZBm/TL6Zm3Y6%/ƸaJ2$P~bbaia 6^qAr[qy\BKejV#"=ń1ifo[jg꬀bꉧ5lzFR '6 J)6Xn*JmD<Acz퍻(/_whZޝȋ$I[['r,ciefZlAR Ƀ3iG4ϖ!ln-]MW $tmzB%&{>Dņ|%7ə! n<)#FGM&adb ^'W#W;FM S}z2_}\Pvv^ΨlF KB6#5sn]tcFvrߺˡ6Z)æWQ(X;`{MCم z-w7ؚ"Q6T>|$Q`!*VT.u VK7DAmtZRCZ&gMD ZJh?G\9![ZubZTy24%H hÓDמC>j^f؆ NݩS-` :Bsw崏II٬)dW2&TXB뙬&lIqK8 (4%w;rP[.v9\/Vv{j()9Go)ŏb ֳr(3;򙸬Yw0"hP֕-N5frB,Ct+e5Rx]GZC^HG; XrZ4RPѡˋhr eyw+Mt7c[x{XH>w:LL"C+qVHحvȠyτl)Ƣi$&E <<ۈ6ʇG;MvDe^Pʏ䒟Pt(5ax.ݠњP)'z^~"mZ,p9s?(!\p'(W$_KϘM~c_/ws{3|k^VZIMH}ϛ:ӤscA(&~"0F7baɤ! 
Ygmuܚzcn=1u5oZD$rQ:Ԙ^?3F62`PqHP[-Cb: լpH o A oݚ B~XDݎay?[)aėBll<4BYT{54iVЖg#eORfPdlͭenwrKp}t _4w3s &GڵG=&61aM#`?N+t![Q7eǟ̧^ss -u'mQ%!_L2RkOPTTYNCI*dX;]g:\o؝w-m`=Vw_hECM g: v㎤d$OytdʁG#2o|Wx2ߺpN?o藱 P anO84%. l `ͽA?۾8_ٲK:RO 9m*.9Ih 2~M+T%$?g˓.u&<($>露^pŞ Z"jHW(b㤙DF "P7<ҌpGZH8^}輲 u"1r7tSyBUN6.-t8i\ܠ.yxy' %k /pAyƓD(9F> 9f* z|+FQؐ"8]8;~-nރk{7cϮa@_b1?x^+h^:;:Inp[Sk-seEY[j~yQKRmH E \<>+)Z\[@`㇯K 92 xhS21yT+ꚷ2ĐEnIdCjuM`_`ND 0©kkb ۹OlI)ha.d* VBuܣ*ӛ-{oxt=,[jg &~&B/ey>;)Z:+O16a|Bp/&`g[Py>˸?y3`ZTjsbr[v, :v,vŌ; Iˈ[jH}^w܉$o H aU/<+U,cgOSJݏkVV:e ,ɤ#5\ AE D]`Atn䲋H:t/kE7UCՈkם^Y")YKpinڀv [*CSѧL~3Zj07 xkFP^ÖGec׷;+8,-I]K!{'+vm: lR&PPwS[7[WO{fA[&adEU+~6gdFd`?gtD/%Epo]%v١_C"\R4V?|%Ogf>! |jf֔ޥ_W""fj,NiŒ3״=#Y'8icũ;&1pֳOR.\T`_^\j tj3i"&N:S:Ѭ?al}O& {4&7ѝVZ`2N/'Hh^hg7\i2G.HU#zx\2*9[IwY`P{º:-Fjn^cH^1uM@"v @1XݵonC Cy%OUO.Φ _l O8M&PFSm!vnݱMPm{1HQYOLy?$9NG_zDqE_?-7%>@B)cx:2ZbU}GWd)x5> ڞV6ct\27]I[]k=o;zs-ʲ٩e9KXj՗17mljrl1ik?(">_NЫhki!"PU9Xg~@^4|<ƭtXȚ"ڜ .Ԫj.F%=n׾,\;r 'hGo4nɐvC ID1zM|K{HpyjI 0hg^Zb^͹:ݵ6x/꓿/[ AcCPyD$xŊdW`( V[XLE\ߎ{+w ;NqD9+}\g m`g,m21 ]B^,wMgOXP[t~v01¯<Ϟot2ukF#*~Եq/=2MCW 5If83ٍ$J/t+i9N7E7ۣ!?NKiRv]l̔1l"ِ@eЖΌ2%]PGYOed NUPrPorSz;3.yzn5TYg҃)TFiEj( "[O ܊xϫB3,):̹j*REC;C!f~)oopk;/[;toF $h{ҸA>?.ؤ4Jv&ހ;?ܔcQE b;5`_qiRaÕ8j_g]œōd2!}ZޙlƅToy| ٢V-n-@ʹxX YɄno4(Z J}G0G*cM\%83_}+r S 'AE<2KJಃ5,Or /9IgR1@a' W΢1.LsYK2)lBt%1KL]?_<*$_4 ׿qGXB6 -d5;AR>],P#H^ӕou0y)"!sȮa,N>R(`G uIQ[Zɐ?plRydK{c~'3S(b4]P>MG;^e; wD@ Kk!ebf7eWO1g I3oEWr^#o%]eGUŗD SǝOK`'=ȕ{s᪄j:Teư}r40J)-D*&eme 5҉+uo+k;4V1.nd`&,!(ϴMYoIGkoءKJ2f}D޵+R9h~)LsSj\RԱq@V^U۞fE@~r"#ρpm_(eLbӯ1R]kmh%Sk ޢ?Or 'f3 ~(vAEK);ĸϰ >J(crODrZT$qB94h*xFq !hq[xm~);e^nsM"Ɬ&c~&EKyq~#*|'c``K\^(yY{pw_cp]1F]6N)gm^֙(L,s 4@]*ؚIzĊyUc90eI$ɉdT7,G||rec0[c9Y(QMW`RXx nف ȬRm uU1ȘJ2ErT)Yk-gKHac溬`ʅ E Ȋ 7fTѷ3[1őj2µV{,ۛ_noЇst;춯Ogfu%~F\0o P{ Qw\QMcN~oj${1@ 1amh=b7y΂. 
bp](7haUOl S_3%KvהO[$3ZUM>aCqxšE, 9PAY9`@ث\9g06{})A|"\9JE{ŧbNǀ0!G1>K2Ŧ|c]"AbeLNI֤t&<g]x ^L dg܏2ew}ge9fq@.=`lŘ IEȜ|b㜰E>&.L,jAP|xARѽzsN":6OoHHFZ J"f sŵHk ]\BhZ.vD@* O!*dsm5 )@C6*_Q`Ҍd/.TBIAܻ/Y΅e[u %7?GWE>J|7)Bʭ` :׌;-3Vm`N/: R.o/:,.D6/?v>-uwp1EAkPK:ҥ0k]oXwf-/@%b{RT)# \;f`rɆYSJr=zŸ=_= 栺$ zD˻Nz7r5Mo{c,9|z sĐj1@q=?]!vоbM5^_f7A.Eܷ/j$t1Ԕ/_w=a1L |etOBFEGZu >݇+Yc@Ր phU|LSO : fʛC4!k7i)e&Ct_;/bydIέ*!JP_|ꞩ,TiDEVU<ȇ/q?< -*` 8&SυjID]чO'TAA܅X?OvsڬZtHjRelBn"AߑF"=F5ʃDl9YC|IfNyt$EFeg}AP$0YB'i؉m*!I NYPB\WCNL1܁~K]i]AV*=D*jERh*G2爍#(pXbM E3v "TB`Dy `H}?.ZQH.W8z=4dĻo9U`!ˈjBaP }(t:7Ёxj}9n2³ҟK[+4c,Thrc?i v,M|c* pe`nɆ +\"o%&Dd'l%-G4'b%2pRpW9|QP(F5ʨOϊ!TJB  ~%5Fp,6hKzlL2HV M/OyEOC.0Z*(SCIE!'l$r'!L>H7͗Ƀ!$O@5-t!˺P6<(16†^&XV n qb;@6uSpmHS͏R]&*&40`U,Rdgq}xpj~=Gg^/NG]$r"z^yO1{~;)P=)G4jrc]Mii3Џ;GoІ2LHV`ѷM| Ī|ގ!e**#9^CV0EtvBvothf{;3H'G=FLwdQML&GM+cӋeo׵A[KA7Bu[kԧ_y~c׀%1"( hњqs7_T$ #*Du̘&KKa3 n<1AaǰP0REZ L٬)i $QP! ' vbB4FM^56/*rw-eaSYeava[Lę(c'cuDmtk]YF)'"SYOaީ蓵?wPiY ߹#7ki "6XhPg"팂ܩvN58 ,u%.M‘Q*έpAvR|ܛ0Ƥnw[Y n YAJK5ًbސ? X"I;S3bMV K}JhV W){V:9sHM7왷j=YO= e⾽}zq^v90ġOi=`<\Ej1nBZZ) O" ŸbZ%X<=[bZgwvo܃]2DD u `ަnvh:B%WE_dAأcÈ+HSh Ie%PM:+HD{$O &Va::_]=y>i~'m9nӕrNWqڷ4!n ͸nZW}[ w5#gye0E1PdYnTG g5Puk("+qb gp6CuhYG,ky0̔<ط4jlt T_:Bx8 VOC>C?z`=L->FnFc窛0Jyyđ+ӠޣsifFߥr28M(~h?tյ}H7|N^HrxsbX$jd}v4L0 sqbvWgM!·;'՘%W kʎof@ztlH6{ `c-% :5՜>\ S s'|F~;2>znL Ք>ϟNxd tr81z-?x_A7X^)8x)$OӒ_)c Jcx}+2$U=5SmdYH!J+ ftzi⒥câEZ&̙xᾚ;Q/#xi ا;2bYj^JxTNBuI{.3ڨ.f}&YHIQc<0x~h|WqWvmy֮EL[նVxx~xiWa/WZ8Qj5&Ěwߢ`vI\]1·U`7P22곲'0-daLdLIPRt"IgC2H{ckADi4ָZOTZkS-EDQ eP:M#$>7<5.‡U:xY ZU: iL~_1"b!+.ro4$p5nwPjkӬiژͿW}3h Nب6jx4$΄f 0 z܈ڻ׀!&c#-9\&w_RO,E),O?,ag0:vX8@CLU#SgM@$ QsZ%B{T{q^s} H30Jܡtl͉3e8A GyWQ@^\xPUעbn=u.Bt āK=1§KLjƛ|-A֔6FGX8k/ĥfB<9[ߤlnՅ7.ѯ40[~N=O# *p);YU>dAŒ~3M-r!9M0U7{ D=eTס aLa)?hHylFx|oK1VsYT'+cRdR~OMb 91dR˄pQ5;I<Ď^HR#**n^i)w E[TA*&I5RgF\qπvVD \xidǀyÖ)VX0J'EY\}'| LT\j7ح!@QAjF:Ied2LwS kg+"8V#Kަ%V֗]fZ& ]4LWV H<5D稚z `:PPD3[Jz$R*<+?aSfl(]yLPX B؅' 9Y~A9s!gO |pfzQr&0,Sqx=NځEe>G? 
e:-tv <7fw$tX@ׅoKCl@C$S:$LVY?բ`y)L`0oy*!嘢KaƸC40}7_[[ U?U^YW~zTEnlf gEl#=9e=IJVofaO<]xsû2A~y Ls&Lit.N@D+1%ϒ5XVP>4y$, ޻eVO6Z$&U'viQ "-ߥq9 k? ELvSr|x8mk=hzu'n~mQ&b1Aqw6ø0Df.4`9|Kߩ;dL5,戃TT])\kqpr#xD%̉LL᪶69kJv9uFg;L.h7<ɐ֮6شeԭkGdTvT ̀2чh>Vk'LKtxA+,T2ͮ EJ3e%8Ky3#d)[j۵cUC׼Vc1Ӫm]VbuM6ӺT,B>Wm:XєDŽEW$g[ea1y~`gUߘ+X[q4|#TSRKQ/4UT؅v,1Jh]l ⩘sK< NwEE+ ][o2INq7{;87;!mI%ְk=UwD),x:{rK!)XTu0HJyw2%B:޼E|V.j({BR S'X61xz[L-Hu-!AAo8pa3m\]^Y!2]*;XX)sJsA6-IEybRHT_iI2f(F&UB-@ t0XwoX?:S|r?8bM9}w;CP9(ELOY&dbXaH BD塓$FSq'aOP7IKY`nI![2RO]?KGa5BTn/E쥳5{r@`]s 4ܭ^w0 @+J?_ X]6WZ YY ZzR`M{V{:x \H6m-ge`g8v8kSr0%螣pe4^nd`\QKoF^^U/h2 ct5')ҽyAXHXO =,p(|4asu|[9ng`C BHdY= 6Hޕ&x8\x&}߻L'ek(FyzLwTD<[1ņF ѩI :˘UT&jj֢g_AlO E\F $6ލ>\n%N%SdUk[c…9mwb+f’W8WSzΉ#*rTA Xo >+ϱI@TDRSk :RZLZ*Llv[ "?QnNqPǃNHG(RDWNt VCt_vvkmwKN4q(@7b!$a.RLCċ %WN1ts52R-17]{Y'Bq%ލEjDnMU{V 9`~`!U6of[@<6 a!_H.ACūZ21ZlBMx(/sLh:SB=ԦLFxr^wsf7ZvغMtv fviK8s4agRV{*^'p$r GJXsB^ *z;ZJQ݇fwy ikHUب(Ve $z'wcdN\?;-\}ƨ1`!'+q!# F'05$dpheb &GFWC=f}XĞ`46h~t:\q򛤏וEQ+p~m;&c1[P^#i69yS,wkb0ߑǴ7j4?bc5T\^%M[kR [3-YeK6DVxcgDz X4hO?F6Tc(\\>ѩNIԽr)%-Ɍ BKQ"hM̸\!1>a^PIfj (Le T҃ifS&c˼M2Ɨw*+ϟƴ];-H5# l!ϷKJx '*R' 62Iȏ \LrAPɓAT(GH$@bZ&~tT.Btv׍!w7__7O0Ǫgæߐ>6|+dA@>|ȒeGF1GA'~3{Sࢭ2Ǿߏ]Mqqq= 87O%|p 1s0LERb\%Э# Ů!fwrpԀq^E9Qda9YHOҬK6 }$zd~FzIy ~W[c$r*;He͑{i[9,cHR.1'-`$'_^ׇ"B˅.6d`5$bٕ[er<[^H掑1lZM@d͡)]k w^|GtDrGKL_-/0_=Nj]FAF3C]qVH/`x%G{+XWSi0 O fZqڊTg;'h_H닯"n=)Ksʏ hziNdʡ=KaV? x|IJV00"0|R5 -3[)azE=@TT8VZ_u\gzH)Ƈ!0@22{ˍ 4hQ٠b6„8,qD9z߾UkIqe5 (bւUECu l4h΢[qيKNdŁ/+D%b@+pcs5>~tOLOw!54{)D2yPHF Gcjœ`;DgKSLō Fa!\VU:^p/e[1hAtDKH$sY .~$A9Ᵽx30#HH)p=L@s>U@구bIX- v0] qHfD} Tv$^n_Pd@Y#|_y[0T8J3K -<3yA5|YaaUMrxk/`?D0VTV^4p tB&~^ᎱX;, gX+pGgkDM!ۂfu)VbNu݌NOiC-ZrKTإMfc4]zN|85U{R4?QэKR;q 4wuJYK /Ra$$ #E{b@{ȪOx&.Ԁ*ÌKXAdVS=?]ى?ʼ64jHҬ!CvKQځs]:\s88gCe*MwGK|p;U8Q߿<crނWlҴ tmJJfqC+oy#}Sk 2ɦj6Q97wJѱ6J(;/ ctյ~w+ {wSB\s' h~{{/ٗ}&_zi} 9ܣO2-~Z^6Qcwç>͊z2χ!y,6wx0;Rn pd͛՛d;> SN[OY>+x{C oNԘ5]28hwk% y#"b"hΰW&R:\*=n,Je=; /_lP$uGe$BrT8 ˈ~I-K"^_6y! kS9 i%wV zW53B@! 
{5=rT( 0\Qj]^9`:9/v[McZ ŦcgvHeQĊBѸ=U\`;5 $q/fow.nlW;ͯx]u|Yg5]&f*ROnSs*gq܅?ML~'qq>p[\&)*(763M@Y~.ыT"嚟D~GO(|1DHT\8wq 9"*>e!Y1vˢA 8cl1V'D9y 2 }&BXaKITGE? h$vCCSL~1Iܻ!IDbhOHԁ5rՀtπ DvI}~ l F,\!ʅP\mzJa7a8mG6B`_E\bhP:mGGu7( f-9_|H:@ve<t1 j?P>dWmAoyQ}2 RP8Mk\5ACzAQ" \vJQ==}t"C 5J̑z~7}7pDFs)}rz*qu!q&mՁk4z "GHD{,֠}|P_휼B:9fyFLG˝§dwTe0q?$a4y^U} G^k?&!-[f-Caj6]:+YSƬ&ʜZb9Iәa6Z>XH@VXZzj+e4NR"[]fjiԕiblM"yS}deفy'ar1Q?g@Z*r`+( CcJ 1aMi'R7lcl6gј`j{m4Tۅ6LZ]sO2())2 I™:ğn&n( jjh]6imO!⟬d/\$z؍jҮ|(V<ΒoE?d*-ك.򚺃| e~oXvI/CB9f:.ȦL˙ҹK4~RpP7A4 w N14>XRQ+h' 쒣\j:,;, gB s6dfҚ$ԲLy3C.VUc^fԂ4,FӼQU!Fb5Sq#;Ћ]&i.PӰ薎XUzaVjGi ۱ VmKG×;5X)1mr^1JVڄ<ǸVڛYfJAوiyDD|CH>uG54Jܱ NB$P_bq:Ҝ%Q}E\=T.!Le :תg0ù w G M͟*Cp~vR`~$r'E͔:e65w)vYg`ڜMT47iyĂl4 6yҨ_$N0WB$P8H4B:3MK_t[*50DYfYEH> @JR;af9IB zxR]S$rFF<'u %VmK4y(-y[IF[3 Xj0Ukq .'o'vo:w`]84d R矶p#:s77ǧboF,9'XvyTNsU.< b(tvoFT߉-"XO  VO}OcpFz.0; ׭.3޿!{TF¿ʚ:^`u7rP$ dx'~HCAs:Kyqae1R LnG_-:p7̭ ֆ 3](0*ee馿W60Yu|4*To] Y-urmUν^Qp6b"I^=)2|>йAlw!n{W`S[6y}dl9MB/tf.j`Z{j*=>8@X0O aVR$rwT Ma5M92;3A_П%ghyTHbђ,wJR)ĻUB၁I2erca&QkJ7sVJyH? -%Hܗ x""j<4čKT+ԺŜD6bp7B|w" X-''VL (IDYkaO@ȘQ6XNBT)b!E d*2 -Xj̩H {zư-|yQ'@ַ|*03*Μ)x&P]J9﶐,@Я8.6JI'*VUXֵ}Xݻ s%P$-NV@B ]GZiKg3HW''T }NKZ*<NS /b ɭ3UI34=fM٤OXtch CVB1*Ѫ.n1@d0uqɨs(i'7xv/կL5 OƕF WϢҖ:cD-Fu{`'ړ!G _@?Țz4DHG +w~gi̓9o4߾ZdIff[DGIT؇þӎ'/*nOM=EϛS*iu o m-7_ހڤ~=i_L>Lȃ߻ )-dH@Tn?d%V*nk 1~޷n-cƇ>c'c)oadp{ :vqU{j!` "*kv[vȊ0ՀyvUO 2酶`qI/]?BŅ[aUi`=b{Z.5c<=[vӡ}61T (B>gҖ-pNrGNR%.Pʻ TM`Yީ:$j/Z1VHU TGY4,PRN䒦*6S654Y-D"M)"E}H=8WI% 1Ť0q/ؿ^BKCVϪnXq@)\ )Qzi~X]O%W^V qhi]"Ћvkr!"rWduuRs&UR/y6C1(׷~ǖ59r^.z|k*xJQ!$ :y(jhmwEWvf"#,F6nC.ٮpBt7xhh+GBv+o4B\Y2Zޏ麤WkLpv:I.WښcR.@>E]MH~F޷%}#&$z$Q$WR[NVҵN|syn^]2 72hoyy"w u]F-L7oߛ{J4Dzhi d?jj*?М}3b߽r}" p$KeUHz^Po̒Di=ieGAP:NAY}/&/TSӳ5#j'1ʼn`edj/S)l= 6U\ǚF"͘64A's\/`g4|ެ!һ nY<6?`Ӎs'x{g_W.xo ٛڱv$xЎD&SI_D(k4Lm'9R b%C4uy"˞m[,͌T`w{ ebݽI~c 6pG#HmFLė(qĞ$5Ԙ.Uf~쟀vmSmˍ}Ǚ!8.5Sqx}տ=x#n1(@ 8@'I/e뾿b}yI/(2aµ z$ߌ2q=RWkose# p-װ:>$BPDJNȣ+0R6XľgV3! 
M(14B;Sɓ+@TGnz(x{H١>oHzjb.jST .wiaM `;);&2 Ya}m*z?|tibM`eƕy4:%tΠd"$%>Y @^F[܀QiWp*LFp.F7 & G$ܛ}>B HSM01%p0iЍ !CMi yFa[+nYU0-2ͲVP, ѢDL#n.wE=k0sawEZ#*Y9PUpD w4j؋z~Q3rE!Ud75'(]3r;_4hD"f'[_6sJgwE[#NA?.FI `^+"v\#o<\(QN)kxeOI <Aff5pa`]-\ R2 cguYP5ƫ @ou9zv\%[g\2)=Qˆ3yҦv ` V6P}U2L_a߅} y@%Wv-4'?>-OʠF94D%N~k+2V҃[J*U4VJYz5qġ GT&9n7F^ad0xw0_m;I: hܰ##+hL!ㅥ@k>@Sb9fuoJMF\;xgVUW9&*024#u(h~d@D$IT0PNNpW +VDUr0^zD8CoH3X'6l lKO8`p;5tH 51)BmU GFQQ36nhr%Z{~wE}@0L(cd#O-OP[[S%xKVA{e[tzmGM\YER8gYn6ѫT]fPbnʲɧp d3˜ ߜO5%:-c_'0'>lkQtVmtG2b@f d1V:TRYEO!h8ʖ-*m%E?>qo7 GZA-V;k_Jg2>fMknٲy~K[oDf~sp$WpFOg/˗ϊ\莀ߜ@G蠙&^PHDm,M1ɿ2֫`.|v('Zqآr,>)ۡxK޻Q>lIεNJ-wrOE-=4N_kIz,'J("ݭW0[U_XVw߯:S%qv?'YUv]pSαs`NnvVu=/InZSx168a;Hlą @k5U ."H*@h"TbTEJoH(= oaFv&P{}NY/!w`05 3c^;i(&{T(Do߷3|Fu:sssL,,gb^܏&H WlooH̗\wC/(&sT>Tc}a)~|!g /GjKBn%޾pSSw!o9\ѿA2pۭVʈ#3) k_0?aK 2x$#b($ygf%z.1܅1&2wr7xxYͫ Ju'4")Ru\dLt"Jb/M=q$dyFJ 0~v4ke^霶k6>UJe}_Ps9.^kpHhWQ魊-Wc囫%7 ߰*9h^GmG3g5Oqdž:U O=&d0awFBFQ$k1Qi&õ|0mYq|A+)QGի}cd'# D603x'K9t~}|ylY^+ft^KӒ-O峤7(jQ+R5Q?V|Q&sxQgpݨFꐔ5m^WԿip$o o<q/ΖsӅr>([}?<9Z߼~9k7:sӤfXqBjxÝ9>%C]4 B$$20J*XK߬jC;ݺAig|MטʨM At,OGe=i||\R Msd^]#/6IX3jɘ޾͜I^ARY`(۷L58ߨ.ާ^(`ˏ'rz(cK.O9DƁcG2/+X6}sZP=x}NUG9Zc﷧k#H?+7BdO,z?-kJh7sێ[غӟH831N6:> />Ô46<.HSqe)|Ju/)WR+Nm^oxh Υ(.٩ku|å[eH?S%J޷Rb1M.^ 6ץo4 l4҉a36f}R+,0˪ P_~l1zvLY׵Eӽ]z#脝P_m1+ H 37E4tfB;0 /Hy˟fT;q!O( ߵ>"ٔΓ,ִbVaW٤fLKUNN"o}V{& zIj׋e%<c܄nW.͉Pڐb899e{JWtA:!ϥgEG+ŤV3wT;-R;9<Ͼ"n.ۦ`8tм"w./m:f# KE&72RFs毛oai')%w)ҹȅZԖNunoeѧ]_l:׍M% 1gj9ͺz&/on6݇ȸ|]ca]=0}Llh˵պgCuad=Z`.Ȃ˛[鸚&oLwy|5jaNɟH[f8@cQFWӹnp>yIYr`]˦ҷ8s5t ک.uGDܷtιz؊F&9IHv"t&e?C<8S',/ } ;] exepPж:cah۟}RIR'n-|/epV.sR1e{g<)20^|C^<[!%#{ZN,Њ;pTF%do=;^^zō2~AqM\UFD \ⲡƤ-}5z~Nib7R]ڥ9ߒ]t8Z2cts9]MȎBt+(0n'WyaY1{PDc/3*ӌ'ўe?l|9YCs\˲6:!oZTrޙ/*FmZԖ14d>.3w*ܛJL'߳3"?>qTJWr 9mv#v>VsMTDNO e7ǯ[ TasCsPD;ׂDҎ=n`ΘB"st0Jp춿`eq&2 a ۦ7(%-$Q2~fWqg' RH53]#OY4Ct'ÄǓMUΐiR~T2U[Co$;\ƒzO]sk$&йCKI6zz5I9ѱqeyk-8ܴ@Alwcöͻ+ZF,|۟,F )YOx6]hOuAR̹79/kb@v*ӑ73];,F>2ٸ~LΘ/65'*D@Df9Y-7CӴl_>"igGs_fe+pQm*at.,6wCv)u/]QK>L',drY=MzDq@]M) 
7j|fŨЧc/ڡrF"L%Ӫ?$~R,ѿmA41꣣Osn?Art5r{T3-zly_W~~UϿHsVBTU!7J659~بȈcKnyg(O/_'FNv4s-K˳.i"k^i `,\u5+:T,ϵ6}tLc>+iD =CXBI.Ʈ.bZIש-p$үz:aK Ɛk,zz$*UL )~UZLxg٥?D|<G-ŇFxLc5/pAp2|bqge;.,'dqm+~5ꞝєk0$^\}iiju#cȻr6$ #ƚM$5phC]yU3lhi_H}A~һkRœ>i՚Sl4B5E x1/k}|> .dt (^>i, T{ccxӸ֞j>K#܌69oLr>6[Ȩǒ~fe|j?bԇ~ T(nO0?7}cοs*'Wv9 U3 "hYw )n%* oVs}gԕR['zWR8>I_VUT>#qiyxy9)'9a3֬, oH$ {Ь;WIqI[oRЯ37 (%zp}^hE59;sVEROr_v(vlfMx+GHVRHaZ>r :23Ҁ{ÝVV?O(Vh}V*nĮ<6$R@BQ'Sj{Kxœ}.?iPyAI&\'acd&RCPz>L֡KӢnX0%2G,ԄYdϾ.ިM\F/9|<XFsRD|K[ZDY%J1uQT&R] ݶ!ŖWo'ƽj̛A? f|֋ܗA~E23[73 p: P/8ـ Ÿ OiG?pOQ4]ϐ8Xoxl?`U:(bJ!W.NjRM t@wNaze2@ ~} !o/6u.̚6 W58@:B=}{W|k0t;r!Q@Se?t,Q*px뉢^DDZ]@f}أ==( ?3 ~܅G(Xl<ȀT'{?_c_+ چUm!~IC{g%c; Łz'JZI~ -@@,>O ؏B Z.HD'  Q{hGm!@Dw'%:=n5!`0KV_p) cO%vg=8wVCI0 L?5?G69G1œ(ĠN{zA Σ+ebYP ^ R?E+959]?EB ` &`a%kEw~%xBC(4+eW}jPwe=<IRW\Ȼ?g_`LQ 86aEtٲ#$5+8G!9<1pDB/IZGB%`HؾD}*w<@טhܴ]R_ /\N }[@D"#b9ٵ8YN4S/tuB"0hu,V)p1(:(˚e%Q ?ДA%eY|>M4egSEwe_rK ]epMÚD.VF 䋏-[v_' 1.p@أ{llc@ ae)VeUh<`)x\}@y q 5-x(H('7 ^@9bI,1,Wߗ{"`+ uC:-@"wZGtξUa5XN5Jh(@(Q0kYH}ɺkpXm^5vUsa!8Vnлh^ s^'d&O-Z_\\(Jz >C>fdq^XX0N*mTm-BHN #aDb䮽P$@Y0S  ߫OVM3Jo)y@NC-IRO^8sX9flU0'!{;Qr`(Բ[@b;pQap^ pY*O ʨji'!E07?y1B4N1dbZ xk,V΂Ś> B,XA9pBb=/Ģ`0G^: V5:*^>8Kl,ͬad_ºB{u}&y`qY *ĒZ3@'iDUݒ}tp?-5x;TtW ?׃ EUpmP! %.I8^e[%.z8bW'تeb {yH2`F"=a{KϜ تbՒk!ڕkL`xʬgkfq%Մ#0 2`paQ0g7^Zn# E . u/Ffؾ\ ]Dbt U^/ N?s+Vk)@$88Lj)@^er6.=Z-ܯ@rԑsxw*h/ zw"0ʃ;cY:^a{5n+N:&>ۅ>h,', vA74 wKd- : %SmT,m~x$vՈ{R d6jK4V2k+z9DL~ wwkcQ*SbMX MHuu3n; #fˇ0;=akqisC:aPu&ܣg?p٫+r9L*RBcNo tV`@s?Ćt;7\,V-@l#=7x I j"]P,n%8 zn/|Morȝaԁ0nj}oZ ?0ʆ~X[QYǴ1FlƁu]_| U]b@! @ @KT''fx%,PLQvu ,-@+:]sppa~ǯ@9}>L ̮3΀~“ɠ8NGl) ؓb:- IJYƅE}D}\|f1ӱkX ')|$%"Q1JM8) 7{3ƒ,G π 򯢴^U'MWLN[zjCw `, @ Zȁv_`֓UeR@ `K{&l˱u@3ClS*^:@ūzSup<~q;˚.q@ff﬛ ^7*CtzԆ%lq ha{|P` oOfa9;sn Sw^Kyg8¿yX԰QakGc {աo| i$fY}lp䗻D@X\ڌ]O^Cl,3ںw>,|}[$k~xS3#X(߲k#~,\`ojKJ*^T`.G%yh/05&+. 
*[d*Q&E@C-UJ\̺˺w?Z|@D!Uh#ThmhzF=̝&/{99gCv8;r`/va *~J[+Inp8r9Z‹(@#c(PRuFBߓ{< ŒTAJaөxo;j8V+uQtUW偊׊Ÿ Vc(2G^ܧ7ŀ#Eړg-z5B/\nN]U1cࣗA Hޚ@.e3-ђ80-ms%LĦJ^YԷunko2d ?/W]OI M^~*0d5c|>LH~g'#qKf?,ӭ›K M2t9!̩aCcRRaFX%%F0 3O 2; 6?{_8Y,RY$tY4V&!4oq @TkB%k& ?~*s5Ȅ- 艽@yXdVar֚ѩ@dZv*!6%CR0vf,V3ω{ F9ܱy HfO48ō?'w ğv*pDuԊ ƞKл`igdawhZi 2υKlgcux SM0WLN6"^E)JQtWbldׯ m*=|f{3~3U 'TK1&̸(f^crOtFg%;SO +Jj idF=eʯm} UA ^%υ6L@eě*Ce{Cα4TT#6Zbq*y▱a&5K-nbEs(LHNaQ[@wu.1NH~\rXe`J@W r+ 1cIHz~PFs%.k8B?hH|B*^ӿtrs:V:~jTcS1d X C ]m2 'Rc0!xm(0gkr'ЏXzWXkY@M7m!۰%M]Ke0&+Я?𰋖v<ն>t PK EUAbin/UTo:>cux PKcRebK+  ">bin/WALinuxAgent-9.9.9.9-py2.7.eggUT$@`ux PKcRp D manifest.xmlUT$@`ux PK] Azure-WALinuxAgent-a976115/tests/data/ga/fake_extension.zip000066400000000000000000000002511510742556200235600ustar00rootroot00000000000000PKӁ;Uq%97test.shSVO/JMWP*sMK+PHHM.-IMQ(.MNN-.N+ɩTPKӁ;Uq%97test.shPK5^Azure-WALinuxAgent-a976115/tests/data/hibernate/000077500000000000000000000000001510742556200214065ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hibernate/TransportCert.pem000066400000000000000000000021471510742556200247270ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDEzCCAfugAwIBAgIUIPFB6eVkqLIfKddkQRQz8HgUoqcwDQYJKoZIhvcNAQEL BQAwGTEXMBUGA1UEAwwOTGludXhUcmFuc3BvcnQwHhcNMjUwNjExMjEzMjM0WhcN MjcwNjExMjEzMjM0WjAZMRcwFQYDVQQDDA5MaW51eFRyYW5zcG9ydDCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBALyU2NczculUacfEOy8yixAlPqUjHZ5f x+hogp3kqIbTI+D9MjHGI8bzOfEOoud1OMMlBozE8eYxjhY9Zjzw6/QDEzotmlTJ Tdk7fNx19ySVhBBWlyck8m2Q3fKpTTe1DwV29nXJYsh8Z+aVmsNGiZUe2HxJQJqc UtLVPYT2SMnCwbUrUr76nYcoO8hRY1H3sGOMGK4Ejp6beXUmMWAfd27kKyY7Wtdk gXMeVqMe+G8WvYXyvCsiRvrqpNtSd5RE2/HyssPLdu+z4ITG7GJRmKXDQ7p3F+P8 /unW6CuylIJcxnZzpocsAvP2C7UTrObp3fF/eAGSnnaNT2bfbdOtCLECAwEAAaNT MFEwHQYDVR0OBBYEFE8Y5NDxq19I3Z8q7W3p9itAE1toMB8GA1UdIwQYMBaAFE8Y 5NDxq19I3Z8q7W3p9itAE1toMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL BQADggEBAGQmQKvTEeGnawu7bLaSHNfUsuv6M2h6ckgKSJoqqSPIlU0XulesZYM7 ZWuBf7ajVtPCqG0hRCeEVdv/V/dVu7jBXnPg3iF18VHY72ZQhY/iEq1TVciHMxqf 
v9rTSRFtPVKZ701bPZqP7CADLeEVwA0iDNu8BZkl6ytwx35Eqanw95eK2ifc+kz9 NXHNbUOOBZXkGD78ESLsicDsht1CWnZh4cvaHG5GzC+q+3yrMvMKM+/djuXN9YVt Ks8tUuL5rNRwhrsPoXDuygFnjkWe3vuqZdwfAjMoTEfZ2zd2wgk09vEwjZpR35lj +sGK8r2hw7gtP3Yo2uAh8zRA96e/zDc= -----END CERTIFICATE----- Azure-WALinuxAgent-a976115/tests/data/hibernate/TransportPrivate.pem000066400000000000000000000032501510742556200254400ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC8lNjXM3LpVGnH xDsvMosQJT6lIx2eX8foaIKd5KiG0yPg/TIxxiPG8znxDqLndTjDJQaMxPHmMY4W PWY88Ov0AxM6LZpUyU3ZO3zcdfcklYQQVpcnJPJtkN3yqU03tQ8FdvZ1yWLIfGfm lZrDRomVHth8SUCanFLS1T2E9kjJwsG1K1K++p2HKDvIUWNR97BjjBiuBI6em3l1 JjFgH3du5CsmO1rXZIFzHlajHvhvFr2F8rwrIkb66qTbUneURNvx8rLDy3bvs+CE xuxiUZilw0O6dxfj/P7p1ugrspSCXMZ2c6aHLALz9gu1E6zm6d3xf3gBkp52jU9m 323TrQixAgMBAAECggEARoM9hVIWgIpwtyJ3otE6UEIs52B2/bYAsLULSfCq1ybx vnnOH/Bfhk+B9dGsNfGN1OHuTgqCDLmw0D4LEXRgNbBEqzdMArH2quhkaqatT3+c juNhx2A4SaGma8rENbU8taViyG4RwrdJvl1oLeYdIaYT+n0FbApRqcXUJ/hTBSVq Iha4LZNhPAnEbtL0Gdxpl8v/bHy+EHyrPDpY5FsfNkVmJo44s9IqpCFxZD6VL2FF Ygn4ZeqCDTbhkSMFBHeaNm2N0d5lke5BHtr7m/CZ7eHVB077g4gcigHqqu50vUwD 2PoC9gx0Z7CYQz//r9uId40rdmQx2BzNN8TdixWtVQKBgQDXYe0QRQunidmmwGeP bKLrxlbi2pMJ3JsHZlFsP/G8dKfIYDZD6Id46QZKUnqGW8fRywP3AgKmVD9ww+16 Yy+3KI5O63rymnSJfXcD45YThZvOgYVLgULqAIKF/IVnOmwyM2RaTD9gH153W6Vg rVOgiShz8h+QlZiOBrUyg+o20wKBgQDgJQnQT2CqlxhQ+VYEHCab7o/NkmD0XppP QrQ/a2qr3gM3XgnccNLVGJaLJJvypSmTaMkSyCrWXHcrhlgl80MaTDhwAtoQ5FRL uVNJwYmzS62eBBKqgEudsNa4x6BDRjKctqqB2mwn7qWEHdbcZJggms/60ckHgME3 timcr+lX6wKBgAi44H7OLQCl5niIRilavuZJa+9X5qh5lJWiIR3/IOz+1GSa8Nej LQlKdoS//lI+mUL3s7tnC3Bs7PzGEnHoXmBOdiTgCGSWuK1wtUclCkjUGlEskZdT LsCHMMH4Tfa2OPd3eVLmz5I28v5mabYWjtJre4Xmgjy6sijeQKxBB1UdAoGBAMOm 0PgqMaD2nt0fp7uSrwjxpki0+ziT03JYMWoiY0x+UKRly4nGWeJ0wgPXAuE81hu8 HbftTacrs0Ik1JDb1LkBy0nK03pnNEWdEVySOZZt+rCxsXFu55JQKD3G0temUMuG jzMl6763i3bVbRWYIUpkmCLCOA36j64Hri79RlvdAoGATRLuoWvjncSdMUd2Lio6 CFOuk/erTQIjMMDcyC8J65htUX8KwJYZCOGk+ZiomcYaPhp5cSqVj6ZIWWKSh/R1 
LeSL0wqsST3OCp/gLUWRgcDcXI5Ei08SnYxFBegfm7hWXb+67PA3Baqxo3EM1t+D U0bzJckJAv12DHlC/sixePM= -----END PRIVATE KEY----- Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/000077500000000000000000000000001510742556200237505ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/Certificates.json000066400000000000000000000003351510742556200272510ustar00rootroot00000000000000[ { "thumbprint": "DC45A99039D4F9CFB0CD84D566BA065A763F3762", "hasPrivateKey": false }, { "thumbprint": "1AF52E2E892C3A0DBC589F301F80073CBA924E26", "hasPrivateKey": true } ]Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/Certificates.xml000066400000000000000000000123511510742556200271010ustar00rootroot00000000000000 2012-11-30 1 Pkcs7BlobWithPfxContents MIIOjAYJKoZIhvcNAQcDoIIOfTCCDnkCAQIxggEwMIIBLAIBAoAUTxjk0PGrX0jd nyrtben2K0ATW2gwDQYJKoZIhvcNAQEHMAAEggEAQu9ppyJkRf2NM+0R8u1U8+Va EWkYPBipi7WB6xU1FORhEVyLLjFVmBJ2i14fudsZaQUrYu8JREJqqzJKT1dbjRdw wCiHb+OQE7P2ULal47RGqAP42HIH+1MSOABtKizyUb74ytDenJzN1SHFxLiml10v K76BqCZ1q8f6n0I8dVNufnr0DZaKkjl7NnaAF1J4lIX/pBguyxDoUmt0E7LYVSPM QjJHWf09IdZPPc45BFdOX+BjkmGswOUVvs2Y57HyVzDA7sgsqxrF3SMK5SfX+DzS uyTi9nsFkpkn0LAcSVF7w9l+DzTVKZ4Mke3RHo+Jod3Bg5b3S+R5bcrjAoLt1jCC DT4GCSqGSIb3DQEHATAdBglghkgBZQMEAQIEEEzcCb3pevX+ZNvSgipCeEqAgg0Q tCHUafQdckrQDjeRslQ2E3vpqgjeBaE+2RcUDe0M92eK/APKGDlDllcNLZdxh3qF FDfRxRy7jkQ8PVHAGvlcRkKz3+APatZaJ6wzjZ4o0Lpg9upGeqSQLqmFVATERHG1 Y8fNweJxfo61J0qwIPRoyQMhEQDjdwi1fRtRUo4CGK8xVZkgvy0WUKGnXhGx02Jx E+CMjb6yNTGzwOahHMkbLDYvh8L2R0RXA8SZ20YC1vk0IjDVJfPGq/6awlkRjaqe x49r+CKVoO0bz1vku3jleTDSblkPYILPPBsVn9Eh3C97MzICYXxKWhA7uOXf65ry 5FVqipzWyzA3bcJQqHOssm0KfxVR67tr/x40GpOwyhy4l5OuT7KWsnI4dd25edC/ doGgQfCBDpWSjnu2+Kd8xJYUTmtC6rMKv5snMjUmBKMGR1GNkxX07DYxOhpUUtKa XCLXEJ3xiFYx3yQg4zsBEBJg2fZpPX2ZuNvlfKUY2ndSp8kJWTuwzlIjanKeJ1fD vEFxXdPQxpCBf+m3QlEe1X9QTwD0feAhu0f5GV7I2QZnjT6jKeBBhA3i2QUwsHLo BMP3GVT3yHtnz9+qrpaznqedg6Y0lmsm1ZY3uRzhCPlZ/zUZxkRWXc+vgmBLcjTj 
JpbffmtaCddc273oDx402pVtik1K5u71PCVAax7OyfdP0h+rismAubgurbtKFyQc IK3DJFeZ2ZNX7R8wvtq5n6ZgxHvcllpBBvn9uVtGPKP2je8OUf5YzZIRphc7pWIZ buoNxEPq2LvbF9yWmeSTqNgsIuubbvaGmy3tOawQrMjFpUiMeTCATE0kDNTvG42a RjOK8OHjpFp55UQZPJCZdJZCK3EqE1UamFogH1Tm/RbFPNlVOx38+gYEl+uKKu4C X4SFYRcD4wXqdkPz9mgem2kgbdS7TRALzRgYL6SsmKxRtCehFen0xYtpYnVtln5K vWdx1t06qs/IaNybUuyfLSf2RxZ5sorCUTwUnBUYEoOx+Q6vg3qjPim+laOigGHS DM/m3IuIl8afef+mmIxncWwaQ8PPDhJrqHTyqvGq+6wglzm2KOaSip0hbUlgJn0S dDy6i6I2IYKDqdImEz7vUmFpc2kB6sdf+w/rsDco4cwnASsAZu4twkFOhs5T+nt9 lIedfA8EAWHcQbXHjKq2Q/7e1uxw9zNkqhAKTc04pDtv6uPTuBfTNQnaE0XX28Qd JuER46592BzTmcA26vbFrMWPnvt86os5pC3dERVdUdsEh/7YeYf5sfb/nVBgzoiI S7fARmDQ8UV7xR95G9E2yL1HlPiAVuo7no677MAEVuly8mLT/gRGNnb+zGk9FJhg 2maWIMUA7ph9accvBUNdYT0u4qdz+PHQB1KMKlaq5l0kYscDHgp7Z91Mm8LCwbbs SQxU35g07K06Z51LpfB584jS2fNNRWrK7jljiw/WkIXP4NyNBGduEo7KD8PNOldn gRUevFCieDbDcrNW15tFobFmIYI1LPtMvfc9QZVmksvw0xJsTq+NXwnF1kfyWvTR j7chwpFwhmjDB07wN6nK3Vr0AYFmXI9ciW9UV/QoUJL9WhesGY8f9/WLyV46fAQr OGtk4c6EISjzZ+G1b1lfxkdSByzmElWpEks4rv7u93Ph0v4/5I1pfsw+uRV/HVBH ggfmQrekfjcDU+4CL6bi3bqFv1JRUr4EcE04XE4SIoVcAcqJpVxmoWbGEiZKsGnH 4bf14NkZjcup9MzfDmik4t4sXn+vbRWoohj5qMWSuEVWBmo2ae0j+umzCYXozWmx hHBElb3tZ9rqlG6/zj2iVMpkUWJZLr/TqojKbdAsD/3/+nycrrf+zyEg3PAvYQgp cphfr8BGYbGxknF2MWMjGUBfhQ8EmdwA4y5YA87igOmfkxXLzUs/Ahc262Sx5uXB WIkJbN7lUiyo+zrAeqgvstQ7bHm29wnPG/xuRgabcH4h+a5e6yX2CjYmgJn8nWtx jHUxXhObTK4v/xS4HGIk+exXsI00CbzL25k/dCnMDOnhsiC5PiausaIQZW6lk20t /4oaRV213YSvzDvXoj+7cgJYto9eX0zFiJVO/IVcdP148lP0I4guaTCi5P/6v6oc 1O2DwIjHJC094g/mKc8jsyQbyu0lwKwoo8BsLhAxU+7OyYIkEwqIRxJT/7/P/VbO +rKWeuVPHSC2my3Bf7YrRVIVu1dYnZ8IdNXiQXcQRXZjRQMhVbBHDeBK3ZKYJZpB EqGCNlwsIjrpO345i3d4bg1ro01uNCuvf4J958CUH6E1i6Z86tYdMNTsM9x/tSqe NoDSx+4/N/Gr6EvlbFfl+J0C7M0unc0xvlYrlbjcTxfb9UqbP9Az7gwPcuskOZzh ODxNL2ywpCens1vQAq8MOiUIW9yWoKDeUIyqVVelNNxDjDLQ0EGoXkUe0+EOAl6k PmAz+obHaQxC+IjFX4HNV53B0VhevWm7P8G4H6wxQ3SGwLBBNTqKlEqsYl5qTYaW lfCgDdGCisj0mHuLAkHcAa/KNqHLTRN2gUvXGxNFDAqQwuOI+Uikh5JoywZLx882 
4pc0vV2Yq+GJUvGpmCVL9Wnc1PzfzZDLutZZvDWPVTftFu6NNeTHoGnAr/NWFE0a Q+RbDLTN81DqFRr/X8jZ2qg+kn8ipGOFbqeAck2KiVvW2DN8+s7HFs/xSYX7+MDB 2Ygzxu+q3hpliyQ45tPTER2lkgBEkoOa0HDvgpgzNMG/nu5s9Nipa+QT1ZzX954z V/fIkAIAumIyjF5sC2JKeHOw1d/Pv66LGNywMzP7PwiTPXgxHDiSYwQuX1VhXD52 A0s5W18Pv/PTXHEoBCSsaYUVPDxcEznsDNA9WkTxmaEdxGfVPWLStUf48jYrwCot mPryCWDvjrNBn1Z8JtJN7ydj0p7KoLD/pK8mbctY8SHfhsT6B3gN4AsQ2UGkFJRD sfl7500bgA01Y2jRsbPpXy3qB4nCf1pVr2Ynvzo/UrlrFmiXJel9YtcRCXq/o5K4 TQe5XNppb1LAezQ5PJr+eeQ3ja0x7VhJuDGaEYsZytZfRwE6X/F3SIBpxnOcJxk7 1YOiQYwu6ufDQvM/WREm4P82N/V0HS81mr9z42DbuhSpNMtCFhpLWG1UnZH7IMR+ KAn33DeWPi309D7gAUz9njwoYQ0EpHClBb6T3w/8rjdYrKbN2RwfQjttaBSAMPY5 Wxxa+3OcM2KmyzUJIw3K1mf+v7IOzcwlhAPogYngdX6/0L5DJv8UrPIZi+ik86T5 aSXpXeTHesLxJ9WMV79zJRwLUIkvrG8k6giPyYY3hbu7nva9r8xoPTcuNdAddeR3 WbM8yzO9x8OcBlbQWn3fq5RupkfZfB+CZRmtDjCN5wPwRM5wUtuTWMoQbMu8UT7J AMS9THzri7PJjB7tbzoNaEwWyZpIjKbwZ2UmJbiQXag79K12PKMiNQ61HegAo+9m HvkAKgnyK9ueh2g//0SjEkAkl56kVt8y43KdLg2FejrXvxtaJlyfNdXSNBXbJ4bd 0jsh4F3nZZanz90iy9dfuDrI/F9dch//PzxS+E/Xyxbf9TBiAPqdcWqEu2XW7whN BEaXGAh4NmHHGpKcPK7TFEmLlyr86Amb78t3Z28zXItCXNXjFCnewtM2Zp+5w04M HBZnyaOQZDtsBfJBnhb/2THszWJv+g7J3/3FXgoCg2JygMhlCoMLzDJGQlBA4JWY qeFrZsxe4AUKRbGS8v5H6lR1jdf0EVbappKccS1i0P5Kw+fvnuwkCxxnNTayhpiv hR6XSEPsrypzx438zcyv799I6DT4sGcy1dv/EDsWkJSYAxrCnjG8IVoIOtfElySS ms0TnUZik7lo4HjaSQ0BAPbWQdtGGjdjc9gIUfeuZHu/GmO7O4L86ZVOKI7PMSXG 7wbBUStvJtDvifoqJKkvQQn0mw5nIGMODrP1G7Nc6BCEn0iSdd3/kD1x5uXRfxTp juY5aPAe2IqnDTxjyjFZoQKawpu570yhTrMd7sNdVCop71J+nAFUsIoWbBM5/C/a bcPFV6nAj3SIyzxJlS0BEREpymvF4I2kuQegzWkj8IHn/ITubOamAcMjPc918h3W Cf1gAEccKjRtK7HCajvhoTVAHhBfdD+knGUKQLPqB8TKwjWYlvBfuNlj6I8a9lht M97xA4u5ma0pk6ql9Ng8Fdssbey60+VOEJgYAjWV6wAmEYWKQ8IEhesEOGESDrjv 0mVvDiFdmyXeLpZydaO9Rtf5bX5YVnO3my3c4ezH4jqTnEjm1rnyWV9ww9potFzQ fFdcV4pArY5nvXkbZ8jhSpldO2AfhtH7CvFfmkn0vd46tb2gwH2qTg7z5n+7De5b utFBFjWnCoG0YASVTMZ5dUN5amiofYK0WjCNFuaH3dSomigUWMPBqAZUn1ctol5I fbrXyl3ApiWuj4+x5oDHW2bjDEmp6Vv2Rrs695Z7PRw= 
Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/ExtensionsConfig.xml000066400000000000000000000113511510742556200277600ustar00rootroot00000000000000 false false Prod https://umsas3lpnprqfsznclrr.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml https://umsar1tzn12qbbhwwgmw.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml https://umsabj35g0v1lrp2w3qh.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml https://umsagnbbh4hpglrpgc5t.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml 2.13.1.1 westus2 CRP https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.status?***REDACTED*** https://umsazq0spn4w5z3grwcv.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml https://umsalvlwk4lfvxs4w05v.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsapqg4jwbpzxdrjjwr.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "1AF52E2E892C3A0DBC589F301F80073CBA924E26", "protectedSettings": "***REDACTED***", "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "1AF52E2E892C3A0DBC589F301F80073CBA924E26", "protectedSettings": "***REDACTED***" } } ] } https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.vmSettings?***REDACTED*** 
Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/GoalState.xml000066400000000000000000000040131510742556200263530ustar00rootroot00000000000000 2012-11-30 1 Started 300000 16001 FALSE 06185f00-1f00-4545-abaa-4b7baa121869 8cd0ae53-1baa-4f00-9356-a4e1a34f004c._test-vm Started http://168.63.129.16:80/machine/06185f00-1f00-4545-abaa-4b7baa121869/8cd0ae53%2D1ac5%2D4fe5%2D9356%2Da4e1a34f234c.%5Ftest%2Dvm?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/06185f00-1f00-4545-abaa-4b7baa121869/8cd0ae53%2D1ac5%2D4fe5%2D9356%2Da4e1a34f234c.%5Ftest%2Dvm?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/06185f00-1f00-4545-abaa-4b7baa121869/8cd0ae53%2D1ac5%2D4fe5%2D9356%2Da4e1a34f234c.%5Ftest%2Dvm?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/06185f00-1f00-4545-abaa-4b7baa121869/8cd0ae53%2D1ac5%2D4fe5%2D9356%2Da4e1a34f234c.%5Ftest%2Dvm?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/06185f00-1f00-4545-abaa-4b7baa121869/8cd0ae53%2D1ac5%2D4fe5%2D9356%2Da4e1a34f234c.%5Ftest%2Dvm?comp=certificates&incarnation=1 8cd0ae53-1baa-4f00-9356-a4e1a34f004c.0.8cd0ae53-1baa-4f00-9356-a4e1a34f004c.0._test-vm.1.xml Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/HostingEnvironmentConfig.xml000066400000000000000000000011071510742556200314570ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/SharedConfig.xml000066400000000000000000000006101510742556200270230ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_1/VmSettings.json000066400000000000000000000113021510742556200267430ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.175", "activityId": "8797175c-db54-4b06-abf3-db0c14b37f44", "correlationId": "2fd8dc8a-42b1-4999-8138-9a580f5a0419", "inSvdSeqNo": 9, "certificatesRevision": 0, "extensionsLastModifiedTickCount": 638852750771746340, "extensionGoalStatesSource": 
"FastTrack", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.status?***REDACTED***" }, "gaFamilies": [ { "name": "Prod", "version": "2.13.1.1", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://umsagnbbh4hpglrpgc5t.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsas3lpnprqfsznclrr.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsabj35g0v1lrp2w3qh.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsar1tzn12qbbhwwgmw.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.GuestConfiguration.ConfigurationforLinux", "version": "1.26.89", "location": "https://umsajn4ck0sd50j02kxr.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml", "failoverLocation": "https://umsavph101h3qhc1pz3t.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml", "additionalLocations": [ "https://umsazq0spn4w5z3grwcv.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "1AF52E2E892C3A0DBC589F301F80073CBA924E26", "protectedSettings": "***REDACTED***", "publicSettings": "{}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", 
"version": "2.1.13", "location": "https://umsaff4lwzr0mqltcdsj.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverLocation": "https://umsalhsl5scrdmt03dql.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsat3kgcjtz4rzmtdhx.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 5, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "1AF52E2E892C3A0DBC589F301F80073CBA924E26", "protectedSettings": "***REDACTED***" } ] }, { "name": "Microsoft.CPlat.Core.LinuxHibernateExtension", "version": "1.0.1", "location": "https://umsan3ktnnkr4k4fpv1x.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml", "failoverLocation": "https://umsac5lzx3m5mmb1qsnk.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml", "additionalLocations": [ "https://umsaqw3srxc4dmbcfz3q.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false } ] }Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/000077500000000000000000000000001510742556200237515ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/Certificates.json000066400000000000000000000003351510742556200272520ustar00rootroot00000000000000[ { "thumbprint": "DC45A99039D4F9CFB0CD84D566BA065A763F3762", "hasPrivateKey": false }, { "thumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", "hasPrivateKey": true } 
]Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/Certificates.xml000066400000000000000000000123511510742556200271020ustar00rootroot00000000000000 2012-11-30 1 Pkcs7BlobWithPfxContents MIIOjAYJKoZIhvcNAQcDoIIOfTCCDnkCAQIxggEwMIIBLAIBAoAUTxjk0PGrX0jd nyrtben2K0ATW2gwDQYJKoZIhvcNAQEHMAAEggEAVaWX1988At8SItCnOc498Sx3 sdXjPRtR9UhgxwLaqTTot9/UYgEepD5Y7mqw21gMOpkVP9rJbYUffacOwcSszQbd 5p5cT7sFqjfRtAChNWg/mQcbOg6yxgKLD08DO63CjYuW9L0uOqH8aKdO87IcITKj U0xfGKE7VFIQg3n2UIJsWZWtU3xckNE4lPGvRuSBVHVMQ28UAZIvD8objJJ6Re6M kYn16SqdkMHb2n58RWZo22eMQZZ1rUpSkhnJq2y2qGu+HuUXxVZn+hVVHWcEvd/X oy+UAKF5ym4smdzLleOdTggezcENcngYruNUJWDQErUPawaHp5V2wCcQsN00aDCC DT4GCSqGSIb3DQEHATAdBglghkgBZQMEAQIEEIdVMqdTOPAiEP1HN6VdKK6Agg0Q MoQcYp4N8niV9sWZb/VQmPLDaQBmjDzeTSZGQstRg+O4mmG9v/61GREQNwMC1eVz NNegAGho/Xe6/ANVM8avZGUBP3LQNkWOYDbcsvVIzE4bTGy6ulVN8eLRoUM2Khcj 8yDFcgGVyZBI8Y3kBvKHNWEK8rFOJ7W1t+XtqQe5vI0j1OWYt3z49Bksh5WeDmmz I9D02trhr2NUAEqoxXMt4gpV9V17gZKBljaULG6PqFcpNkNwRS5GQQVE5rAzEQaO NmIBxbKANASXeF3fKBfQF114KjmFfCMVopj8l90boVkVmDbg6t4/6CiX7HCHhcGP /ht+wOfZ1z7cE6vDxq0us/IAFdy1yzdndMuL+4bvaIqc1iOhBpHrUa9CaEgxDoq3 yjP8cZx3/IiLHkViHgp5TyBXpOQJU4OD7/Mm6N4U0m/4MnaxMUu9lzHYXlwh+Cee qLfi74gOWV08OS6NKy3H7FtaW1ykhUbfSR3AfQh0SfKzC3VVOEclc3lv5M1tFeVr VUCL2bMEIn4kTTjTSv6fm0JJimf14AQnxVrUcnqUqF0amH5f0AQbRz5D2sF+E4+0 QFFPvULbzwLPq11cSt+j30rnujb8I7fiNi+x6nRrKg2ztf3xyO9aO0Jnda6OCJb4 ZQmvu1KlVoOnSQSHJ4jIBYOt/Kv2o5/I0xqazVjHzlR2rP8VFF3K1j//lxbELH/T qye1g/4Zzf2SqIF4Rkt6XZAMWTfoVA7fu7CGgKKILfDiNvnj/sgiwSkctreuP79M +BIlIQYyyVaopsOc9OcGBJGtTw5d6zoXEg/lZD1Ftp1Er9dWSJp4L9c1NfUkem0G 31+f8Xn5j5fZ7s3aFMv76jyMwANTgCT1qjWZGO/lUA1D1M6fbCbEnddIYkadEYtB y2DGg5CQM1ygt6m9u7MxsJtr9jjFiW3uMv/DQLLnRWtEshaNG6pajgYLGzZzU17s lhXcW0Gp9ugTaWh/R+x0to7+LLNepUjuN50zMT67BSS1QscV/qb+8sfRdH06xnpx rYJuXvqbTd7oJNsY25qUyHGYfibx0UpyCi7MaIUPKNjAGpi28H53oc15NBcJww23 WCzURIzi8y4R1pj/1UNRAPFQo7EvyeK4p6p+vVuRQKP1KfCTSnqncvClI0JgBt4G Y3e9UftaDsiRkdMQVsJq5t36tyQQQ2bJEkwWidMpdZgEaR0G/Cdr7YrY4mChZHuY 
IqOvvRFP2Vr+DoAiJz6nNFlnZxjVFFg335JT+daLDYktAo9jz7YAnXXhQTx/mdTb w3ZFGBHzfhvESX4c1QaJ4De8UdXfnMsVpzY66Fs4iW7H3ndKvmrh9EQRUELoUdW5 8cWjumKLLOAJhxQ1kM6aAbc4MEsXFTxe3bSmDt+s80vECcof9P4P5BGtGVBoAD2J 61nM5eXfFW5n1ucqpjHOuouJWZeS+EuHGoMfnL032GjefPytY+A1YzxRqs5Ep0di 3rYJfxo4wjoxiT4aix4pvsfGBok6EVdEr9PqpLFIfDFHGgukDglbsFJB6WBlRc12 qHfSSclNBKbI4UYDSY7QZfYunr7mp5/VG2Z3NKs111dYRVVccRWbwqdC93LSdyKr ffu89746AG/v//sj0Hx4Yxao0CxQDB0K/+oL1OWjFUYZABtxpkYevFxysGPPvJGD Js+xYB+xVd/bj57BrAnNNSgciF+E7+0/eIkDpIMVIbHcalnHPUPkFYaeks61XPDH RWi91D62Q6/wcRWpqjmJVEew4NQEdTPqJXBEkoXzq/lj4QKBwbrvJjixPwENcaxa WhSP5uq597ThRxNJNb5Nut18VYlJVQirjGxZLGFJuO6AIv63xibKXyNl3a8OgJl8 3l4eAta1jCyMzFlPwMiHGPVjFpi4d5ykyxt1D4e7eibNynAdrIpTlLwxKcaij3Cr IactUMV5ctr8TanNKf/GZlfKaxpoZE28vJu+woDiZBhAirnq4q0e4lMrai/1P9VM ApLkXFz6OhhzT6gZd6N8KgvaBgG89QVu/sMV6dovmICHGIjf+LlDMb+2Sv7/zzyY +saECqDjbXg2rGuLtq+KLNOj8Ga1XnA/gUchuYVtT4NnD0jD2ej/aQpyy8TyEsQt BctEpbZY7XNU87DBS/lnMzUGP/S1EHeC9Aw+ppnmiojyPoUfsCAUdqhFKWfNagm7 NdZNX9jRbsAIqyEdXmUtq8e3bjuH5kwW92QH0cQplKsGh3anDxy/K64RbwkUbqvP TDXtYedyTfKhaNTNuWa6zIeBKX/CI2yN9ledlwF0b9IaPqnJo1ulSUEzQ2uZGhEu IcUCxtumNbdhxoAeA6TgDR74gFejUvaS0L3hg8dbScGp+fx2wKYeH3TjGXr/E77Q 9+vYW7Qga79XcPvlXEVn7Tx+dm3p4YoE/yXe89RiWyj7iliWPTp8p02vLmPwOpWP gSzb0TlwVw4oWSr8TMOsdhXitYZfuxLcg9chEKQ4JjgozGvouFzSdg1I6SXpW+WW rLhMSI5zJR+2Y4RuHFY5R9Q9cqUpJtu1v1uGgLmZER74rJBc7K/1+yihh0BLgLrp R5oK5UAGnPmfW5a21BBKmqq71sFlcEsyXCITiYhkKym/ee/QZXxbgQA2sRrPYqxL dVdf7JYAjTPcadD9824aXGzAtb2o9P4/ALZkqI8KeqQVYLG52RDagMgLXZ2+24dm XvSans3AP5aca/t7s7WGc9kbqL8lvFU/G6XRmPIVnUn73hcOAmUDj2sO7hCowkRF AGT8pF8sFIMS/3s70wWiHo34UUTIKxEl3qHTYkiIFyHranuWAaZUQWWCWFS8rw/i 7ls6mwRamb53H38wqnUgSFXF5ysZJpftW3L9PKBTwot54McbSSITKzIFSt/jJRs7 yWDdaG3jrCFS76t2MgEWYMESEg8lBEDORpcqRLAkA8zlkX/sf4Vpz0ARYKjmHF8u /aDGmZvbR8BLNi0fcM7cYLAGXdcLFnAcH36wSMdtlEwZyKFRpfEF/9ul/RV/T1Vc 93CWWIyRYqbB+PFTwYc/p2w6jZrHAXZY044iHgFak+izRttjaaoxNpa1ALiMxln2 m6Udh3IO5C1d0pgiU1vTw27z0WekyygQDcdjzQrELNEdGxzaPgVsCeECHK2hUGji 
rEJP7j7hd4/RZKefu+On8qx/DFeho0H5Tuk0goO/jbgUnv7m5vrfd8Xt+bVKs5iK wm5cOSEygfosmaavfzdBdzfNxHQABwJ3tokGhPhau2a1ToRWrblL+U0TymgNFrP3 p27pzGZW8CwN2AXzfde4ulJhrR3+58vhyxNMlNHjcTvwt46n7uWapaPAfzTNwecP zBzxbiJrda+1HoLr4vY01lFSY7wEwCU21zj3v8UVXXLp59rOKpK+QbhR7w7eR8PW tgtpXj9kVg5e+/ygsGouPjywZ113tdTknPbJkeIVAgIHrs94pplx7lccQB4r5TCG YCI116szucQ3WNSWj0qDr9fY5s5wiHUvHMi8I87CsbM37ECGUlPq6e7ZxK3s9ejN 7rr1oFAgLY6Zy1FoHIf1cyu6KPEQ3TXFPz78RRxusS1DCsDIfR1C7CamNGDSPTYe /VSg0SaqhkZEocKTsvzN5h/8ObztqEe6Z0SXmcfVhkUaIxYH63Jv8NZ4I8BPmUPV dCVQuIPXqZ5Ot8lOAKoecO0Nau8wEF6ddrUYJXJ8PI6Ds0Ja3NEsJrgcganH16MI D94bjbJlscYw65gzjExA2j29K05wOtERa75uXS+Jar/SsYJMUxojMC2O1PRzH8/d HOdRa5wlA9G9K8Nx0qlkCee45DNYZGwXcmNuIu23nk448MniFGuYfGO+/yLq0HyN mEtAWgrDElp2ZLRjWeB7gj6iPLskgu3I+IUS/RI+Cn3o8wfTLd403GkneaFr48w+ ycnFSFV5xGgVP52AN/HrVia0091M3ffk3gZbVEV/AZuV47wMlAbbwKp3eTFo1PZY O/l4f5AsFmu0sjIogZ3Ii/OckijgcqGJpkhG0m7x9TUzpC4LYcs3k+MjPlKhC6P/ 5bz7Qcg8QTzjZl8I5Tk0JxcM1cfadrm/FGq0j2lzkixQP2cgqt8I5BZUZ/8lq42Y BZ/NPaxyO5qeFTHyezlJvIpi2WORZ5vLcWS5qrafjLmGbeadD/Bqc4N57UfDyv+J OkN8IjX2iZEzVtjPFTHZAj/5ty4DaN/u7ZwFaa4mv6r2L9Mhcmv+q0NK66Tu7ViD sJmw0/gAnWU0wVINjqm/igOCMF8uDyQ6z5HEKICTuieD6COZEyQ5wbK/gfED0159 JpEj15dzgdPrx8vfMxZdWstu8lIBBZMw8QcQsI1iv4hsh6gWAIgyezVxc3sSwoKQ BvQUe1X9ZKcZPc+DjQ8mTFcnFLOA9n0KnoRe3toNf42VrDkrmweTJOEFEMmGuu3a 8jgg88in/8TI1z00FSVJaw/kHaYos7Wtmp9Cr3RlOLY= Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/ExtensionsConfig.xml000066400000000000000000000101661510742556200277640ustar00rootroot00000000000000 false false Prod https://umsas3lpnprqfsznclrr.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml 2.13.1.1 westus2 CRP https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.status?***REDACTED*** https://umsazq0spn4w5z3grwcv.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml 
https://umsaff4lwzr0mqltcdsj.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsac5lzx3m5mmb1qsnk.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", "protectedSettings": "***REDACTED***", "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", "protectedSettings": "***REDACTED***" } } ] } https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.vmSettings?***REDACTED*** Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/GoalState.xml000066400000000000000000000040131510742556200263540ustar00rootroot00000000000000 2012-11-30 1 Started 300000 16001 FALSE 6d8baa24-9f00-4f00-8aea-56fbaafee17e baa0a165-f009-4e19-8baa-1d14f00c4b15._test-vm Started http://168.63.129.16:80/machine/6d8baa24-9f00-4f00-8aea-56fbaafee17e/aa40a165%2Df2c9%2D4e19%2D8582%2D1d14d39c4b15.%5Ftest%2Dvm?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/6d8baa24-9f00-4f00-8aea-56fbaafee17e/aa40a165%2Df2c9%2D4e19%2D8582%2D1d14d39c4b15.%5Ftest%2Dvm?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/6d8baa24-9f00-4f00-8aea-56fbaafee17e/aa40a165%2Df2c9%2D4e19%2D8582%2D1d14d39c4b15.%5Ftest%2Dvm?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/6d8baa24-9f00-4f00-8aea-56fbaafee17e/aa40a165%2Df2c9%2D4e19%2D8582%2D1d14d39c4b15.%5Ftest%2Dvm?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/6d8baa24-9f00-4f00-8aea-56fbaafee17e/aa40a165%2Df2c9%2D4e19%2D8582%2D1d14d39c4b15.%5Ftest%2Dvm?comp=certificates&incarnation=1 baa0a165-f009-4e19-8baa-1d14f00c4b15.0.baa0a165-f009-4e19-8baa-1d14f00c4b15.0._test-vm.1.xml 
Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/HostingEnvironmentConfig.xml000066400000000000000000000011071510742556200314600ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/SharedConfig.xml000066400000000000000000000006101510742556200270240ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_2/VmSettings.json000066400000000000000000000136341510742556200267560ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.175", "activityId": "b62f1ce8-4608-45d2-9db9-369936c7b685", "correlationId": "87b28d71-0ed4-4587-b40b-370581db7ad2", "inSvdSeqNo": 10, "certificatesRevision": 0, "extensionsLastModifiedTickCount": 638852753414265542, "extensionGoalStatesSource": "Fabric", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.status?sv=2018-03-28&sr=b&sk=system-1&sig=9Q5lquQk293b7ZbZ7o9g6HGK3oE6Vf%2f%2f5YUvA2xBPd8%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "gaFamilies": [ { "name": "Prod", "version": "2.13.1.1", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://umsagnbbh4hpglrpgc5t.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsabj35g0v1lrp2w3qh.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsar1tzn12qbbhwwgmw.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsas3lpnprqfsznclrr.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.GuestConfiguration.ConfigurationforLinux", "version": 
"1.26.89", "location": "https://umsavph101h3qhc1pz3t.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml", "failoverLocation": "https://umsargnfzcnssnpfskrr.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml", "additionalLocations": [ "https://umsazq0spn4w5z3grwcv.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEBdw95bU1gKHSmrpN2j+pkYwDQYJKoZIhvcNAQEBBQAEggEAVSJGNkhDfXF6Yq6HhisMn967spyCZA8EdoBfqEXQ+3YZ6mJEe376AAozXRNJqMa1UzhC3LpM1AGZvhr/GMp6Epnj3FQy5HEL13GwhzcnNluPh1ljbKUfRXTr1mX2AAHrdikKfMG9GXb0rk1vdBzM8sfd3MaVyXxnJGImVdqa2x83kU7mkv9pv1YF6kVjfLszcjDxHNVGUNZJ3LMS3qgMyTAuyVIpMNnf/5nRo1sGc+eYc8QUbSDhDO8+bFCFtrDEhexbqfE2Ie/TtM/AqT9I76Obok9Z3cP7Be5C22F7ucV8jKk0SUTvny9Xf0Up9C9VAZ2zaDGIVSztAuEV2RvAPTArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECB+2FrZouHwVgAgGI58eR+Bjag==", "publicSettings": "{}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.13", "location": "https://umsalvlwk4lfvxs4w05v.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverLocation": "https://umsat3kgcjtz4rzmtdhx.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsaff4lwzr0mqltcdsj.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": 
false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 5, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", "protectedSettings": "MIIByAYJKoZIhvcNAQcDoIIBuTCCAbUCAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEBdw95bU1gKHSmrpN2j+pkYwDQYJKoZIhvcNAQEBBQAEggEAMyvTexiSexpeDviPJvSfN9V2zgNyppB1PaXCrzjykc7Fw3+xXWxgK5v645cqHJE9FTExvaaJvL83IlAe/ScNLdoT3R3cvYkE2Xu0xDTK07UWSihwuexYSTA7IeKrFLUb9uvMKygPQAvJKq3n7XwFLfvye9xUvjaLD+QEwKX7Y/RKKLfta+kpZhy3GfJPe5U1QgGhkoK8ZAFgFUD78kafBQ7FU8KxuyMTxWthkyRa6EJgdlXSfaPvM39vgPiznbDpiVPqYgwjppnvnRpwK1WrKhj1xUTyaOHQhVzUt9sl+zQjxgBTAff2xbe/igxT4ukudmWz033LTfInc3rR89jGMzBDBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECACrQi7fenqvgCC5CMyFMZqpPtGxqaR+O9pdaZ7wd53ms1EEkhSMnPy7sw==" } ] }, { "name": "Microsoft.CPlat.Core.LinuxHibernateExtension", "version": "1.0.1", "location": "https://umsarmtx2bjcqg0cj14s.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml", "failoverLocation": "https://umsaqw3srxc4dmbcfz3q.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml", "additionalLocations": [ "https://umsac5lzx3m5mmb1qsnk.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "isMultiConfig": false } ] }Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_3/000077500000000000000000000000001510742556200237525ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hibernate/goal_state_3/VmSettings.json000066400000000000000000000113031510742556200267460ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.175", "activityId": "603670b5-db87-47fd-bfed-56464612ea6b", "correlationId": "2428bb44-f44a-432b-a279-bc1d55b39af9", "inSvdSeqNo": 10, 
"certificatesRevision": 0, "extensionsLastModifiedTickCount": 638852755619904276, "extensionGoalStatesSource": "FastTrack", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://foo-bar.z6.blob.storage.azure.net/$system/test-vm.aef009a9-f00b-490d-baa5-f9c64baa8be6.status?***REDACTED***" }, "gaFamilies": [ { "name": "Prod", "version": "2.13.1.1", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsagnbbh4hpglrpgc5t.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsar1tzn12qbbhwwgmw.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsas3lpnprqfsznclrr.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsabj35g0v1lrp2w3qh.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.GuestConfiguration.ConfigurationforLinux", "version": "1.26.89", "location": "https://umsajcsg1mkf30sqkz4v.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml", "failoverLocation": "https://umsavph101h3qhc1pz3t.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml", "additionalLocations": [ "https://umsargnfzcnssnpfskrr.blob.core.windows.net/a56f582c-8808-3cf4-9fd0-c943fa076e03/a56f582c-8808-3cf4-9fd0-c943fa076e03_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", 
"protectedSettings": "***REDACTED***", "publicSettings": "{}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.13", "location": "https://umsat3kgcjtz4rzmtdhx.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverLocation": "https://umsalvlwk4lfvxs4w05v.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsalhsl5scrdmt03dql.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 6, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "33BDFEDC3C0E7F57A27C448BC4C4EB3D4D763489", "protectedSettings": "***REDACTED***" } ] }, { "name": "Microsoft.CPlat.Core.LinuxHibernateExtension", "version": "1.0.1", "location": "https://umsarmtx2bjcqg0cj14s.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml", "failoverLocation": "https://umsan3ktnnkr4k4fpv1x.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml", "additionalLocations": [ "https://umsaqw3srxc4dmbcfz3q.blob.core.windows.net/ac7acc47-2188-9dae-3527-8acb855e707c/ac7acc47-2188-9dae-3527-8acb855e707c_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false } ] }Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/000077500000000000000000000000001510742556200221515ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/ext_conf-agent_family_version.xml000066400000000000000000000237521510742556200307130ustar00rootroot00000000000000 Prod 9.9.9.10 9.9.9.9 true true 
https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml Test 9.9.9.10 9.9.9.9 true true https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"GCS_AUTO_CONFIG":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"enableGenevaUpload":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f923e416-0340-485c-9243-8b84fb9930c6'"}} } } ] } { 
"runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "*** REDACTED ***" } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/ext_conf-empty_depends_on.xml000066400000000000000000000064561510742556200300450ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml 2.5.0.2 Test https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml 2.5.0.2 CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'"} } } ] } https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=mOMtcUyao4oNPMtcVhQjzMK%2bmGSJS3Y1MIKOJPjqzus%3d&se=9999-01-01T00%3a00%3a00Z&sp=r 
Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/ext_conf-invalid_blob_type.xml000066400000000000000000000121651510742556200301700ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml 2.5.0.2 Test https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml 2.5.0.2 CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '737fa9f1-e9bf-4c3e-ab1f-9e03cd0b5b40'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f6a0a405-c028-4e68-bd77-5d491fbbd9cf'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '338e316a-01bf-4513-8ae1-b603b09ad155'"}} } } ] } 
https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=mOMtcUyao4oNPMtcVhQjzMK%2bmGSJS3Y1MIKOJPjqzus%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/ext_conf-no_status_upload_blob.xml000066400000000000000000000045351510742556200310660ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml CentralUSEUAP CRP https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/ext_conf-rsm_version_properties_false.xml000066400000000000000000000236341510742556200325020ustar00rootroot00000000000000 Prod 9.9.9.10 false false https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml Test 9.9.9.10 false false https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml CentralUSEUAP CRP 
MultipleExtensionsPerHandler https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"GCS_AUTO_CONFIG":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"enableGenevaUpload":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f923e416-0340-485c-9243-8b84fb9930c6'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "*** REDACTED ***" } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/ext_conf.xml000066400000000000000000000214751510742556200245110ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml 
Test https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent==", "publicSettings": {"GCS_AUTO_CONFIG":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent==", "publicSettings": {"enableGenevaUpload":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": 
{"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f923e416-0340-485c-9243-8b84fb9930c6'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpddesZQewdDBgegkxNzA1BgoJkgergres/Microsoft.OSTCExtensions.VMAccessForLinux==" } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/in_vm_artifacts_profile.json000066400000000000000000000000221510742556200277260ustar00rootroot00000000000000{ "onHold": true }Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-agent_family_version.json000066400000000000000000000173431510742556200316200ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726699999999999, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", 
"location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "version": "9.9.9.9", "fromVersion": "9.9.9.9", "isVersionFromRSM": true, "isVMEnabledForRSMUpgrades": true, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "version": "9.9.9.9", "fromVersion": "9.9.9.9", "isVersionFromRSM": true, "isVMEnabledForRSMUpgrades": true, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": 
"https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ] } ] 
}Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-difference_in_required_features.json000066400000000000000000000247711510742556200337750ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205217, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "A_NON_EXISTING_FEATURE_USED_TO_PRODUCE_AN_ERROR" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": 
"https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": 
["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": 
"Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ], "dependsOn": [ { "dependsOnExtension": [ { "extension": "...", "handler": "..." }, { "extension": "...", "handler": "..." } ], "dependencyLevel": 2, "name": "MCExt1" }, { "dependsOnExtension": [ { "extension": "...", "handler": "..." 
} ], "dependencyLevel": 1, "name": "MCExt2" } ] }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "*** REDACTED ***" } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-empty_depends_on.json000066400000000000000000000061021510742556200307370ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "2e7f8b5d-f637-4721-b757-cb190d49b4e9", "correlationId": "1bef4c48-044e-4225-8f42-1d1eac1eb158", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637693267431616449, "extensionGoalStatesSource": "FastTrack", "StatusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-qphvx25", "vmName": "edpxmal5j1", "location": "CentralUSEUAP", "vmId": "058b176d-445b-4e75-bd97-4911511b7d96", "vmSize": "Standard_D2s_v3", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], 
"gaFamilies": [ { "Name": "Prod", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "Name": "Test", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'\"}" } ], "dependsOn": [] } ] }Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-fabric-no_thumbprints.json000066400000000000000000000215441510742556200317110ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 
637726657706205299, "extensionGoalStatesSource": "Fabric", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", 
"autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } 
], "dependencyLevel": 1 } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ], "dependsOn": [ { "dependsOnExtension": [ { "extension": "...", "handler": "..." }, { "extension": "...", "handler": "..." } ], "dependencyLevel": 2, "name": "MCExt1" }, { "dependsOnExtension": [ { "extension": "...", "handler": "..." 
} ], "dependencyLevel": 1, "name": "MCExt2" } ] }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-invalid_blob_type.json000066400000000000000000000114371510742556200310770ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "2e7f8b5d-f637-4721-b757-cb190d49b4e9", "correlationId": "1bef4c48-044e-4225-8f42-1d1eac1eb158", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637693267431616449, "extensionGoalStatesSource": "FastTrack", "StatusUploadBlob": { "statusBlobType": "INVALID_BLOB_TYPE", "value": "https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-qphvx25", "vmName": "edpxmal5j1", "location": "CentralUSEUAP", "vmId": "058b176d-445b-4e75-bd97-4911511b7d96", "vmSize": "Standard_D2s_v3", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "Name": "Prod", "Version": "2.5.0.2", "Uris": [ 
"https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "Name": "Test", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'\"}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ 
"https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '737fa9f1-e9bf-4c3e-ab1f-9e03cd0b5b40'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f6a0a405-c028-4e68-bd77-5d491fbbd9cf'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo '338e316a-01bf-4513-8ae1-b603b09ad155'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ] } ] }Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-missing_cert.json000066400000000000000000000061441510742556200300770ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ 
"https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpddesZQewdDBgegkxNzA1BgoJkgergres/Microsoft.OSTCExtensions.VMAccessForLinux==" } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-no_manifests.json000066400000000000000000000053371510742556200301010ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "89d50bf1-fa55-4257-8af3-3db0c9f81ab4", "correlationId": "c143f8f0-a66b-4881-8c06-1efd278b0b02", "inSvdSeqNo": 978, 
"extensionsLastModifiedTickCount": 637829610574739741, "extensionGoalStatesSource": "FastTrack", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://md-ssd-xpdjf15s.blob.core.windows.net/$system/u-sqlwatcher.f338f67e.status?sv=2018-03-28&sr=b&sk=system-1&sig=88Y3NM%2b1aU%3d&se=9999-01-01T00%3a00%3a00Z&sp=rw" }, "inVMMetadata": { "subscriptionId": "8d3c2715-f063-40b8-9402-49784992ae8d", "resourceGroupName": "SYSTEMCENTERCURRENTBRANCH", "vmName": "ubuntu-sqlwatcher", "location": "centralus", "vmId": "f338f67e-5d06-4f13-892a-ff1b047ba5bf", "vmSize": "Standard_D2s_v3", "osType": "Linux", "vmImage": { "publisher": "Canonical", "offer": "UbuntuServer", "sku": "18.04-LTS", "version": "18.04.202005220" } }, "gaFamilies": [ { "name": "Prod" } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.WorkloadInsights.Test.Workload.LinuxConfigAgent", "version": "3.0", "state": "uninstall", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "isMultiConfig": false }, { "name": "Microsoft.Azure.Monitor.WorkloadInsights.Test.Workload.LinuxInstallerAgent", "version": "11.0", "state": "uninstall", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "isMultiConfig": false }, { "name": "Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension", "version": "0.2.127", "location": "https://umsakzkwhng2ft0jjptl.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml", "failoverLocation": "https://umsafmqfbv4hgrd1hqff.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 7, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"workloadConfig\": null}" } ] } ] 
}Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-no_status_upload_blob.json000066400000000000000000000051621510742556200317710ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706209999, "extensionGoalStatesSource": "FastTrack", "onHold": true, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" 
} ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-out-of-sync.json000066400000000000000000000071721510742556200275760ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "AAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE", "correlationId": "EEEEEEEE-DDDD-CCCC-BBBB-AAAAAAAAAAAA", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657000000000, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": 
"https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-parse_error.json000066400000000000000000000063101510742556200277270ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": THIS_IS_A_SYNTAX_ERROR, "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205217, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": 
"https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { 
"DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-redact.json000066400000000000000000000100601510742556200266430ustar00rootroot00000000000000{"hostGAPluginVersion":"1.0.8.159","activityId":"845594b9-d46c-4649-9f3d-be3bbf795a21","correlationId":"2f027dce-6fba-4b5e-a07f-ae7d8097cc6d","inSvdSeqNo":69,"extensionsLastModifiedTickCount":638693753947160515,"extensionGoalStatesSource":"FastTrack","statusUploadBlob":{"statusBlobType":"PageBlob","value":"https://md-z9z999zzzz99.z38.blob.storage.azure.net/$system/vm01.667ae9f0-zz99-99z9-9zz9-9z9z99999zz9.status?sv=2018-03-28&sr=b&sk=system-1&sig=9ZzZZzZZZZ999zzz%2bzz99zzZzZZZ99999zz%2b9ZZZ%3d&se=9999-01-01T00%3a00%3a00Z&sp=w"},"gaFamilies":[{"name":"Prod","version":"2.12.0.2","isVersionFromRSM":false,"isVMEnabledForRSMUpgrades":false,"uris":["https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml"]}],"extensionGoalStates":[{"name":"Microsoft.Azure.Extensions.CustomScript","version":"2.1.10","location":"https://umsalhsl5scrdmt03dql.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml","failoverLocation":"https://umsaff4lwzr0mqltcdsj.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml","additionalLocations":["https://umsalvlwk4lfvxs4w05v.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml"],"state":"enabled","autoUpgrade":true,"runAsStartupTask":false,"isJson":true,"useExactVersion":true,"settingsSeqNo":1,"isMultiConfig":false,"settings":[{"protectedSettingsCertThumbprint":"FF56DAC2F36EDDE292DE9D49B200D7CBEE05D2F4","protectedSettings":"MIIC4AYJKoZIhvcNAQcWMicrosoft.Azure.Extensions.CustomScript/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6i
b/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw=="}]},{"name":"Microsoft.Azure.AzureDefenderForServers.MDE.Linux","version":"1.0.4.7","location":"https://umsavph101h3qhc1pz3t.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml","failoverLocation":"https://umsazq0spn4w5z3grwcv.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml","additionalLocations":["https://umsargnfzcnssnpfskrr.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml"],"state":"enabled","autoUpgrade":true,"runAsStartupTask":false,"isJson":true,"useExactVersion":true,"settingsSeqNo":12,"isMultiConfig":false,"settings":[{"protectedSettingsCertThumbprint":"FF56DAC2F36EDDE292DE9D49B200D7CBEE05D2F4","protectedSettings":"MIIC4AYJKoZIhvcNAQcWMicrosoft.Azure.AzureDefenderForServers.MDE.Linux/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==","publicSettings":"{\"azureResourceId\":\"/subscriptions/z99z9999/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm01\",\"forceReOnboarding\":false,\"vNextEnabled\":false,\"autoUpdate\":true}"}]},{"name":"Microsoft.CPlat.Core.RunCommandLinux","version":"1.0.5","location":"https://umsac5lzx3m5mmb1qsnk.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml","failoverLocation":"https://umsarmtx2bjcqg0cj14s.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml","additionalLocations":["https://umsapqg4jwbpzxdrjjwr.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml"],"state":"enabled","autoUpgrade":true,"runAsStartupTask":false,"isJson":t
rue,"useExactVersion":true,"settingsSeqNo":13,"isMultiConfig":false,"settings":[{"protectedSettingsCertThumbprint":"FF56DAC2F36EDDE292DE9D49B200D7CBEE05D2F4","protectedSettings":"MIIC4AYJKoZIhvcNAQcWMicrosoft.CPlat.Core.RunCommandLinux/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==","publicSettings":"{}"}]}]}Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-redact_formatted.json000066400000000000000000000123571510742556200307230ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.159", "activityId": "845594b9-d46c-4649-9f3d-be3bbf795a21", "correlationId": "2f027dce-6fba-4b5e-a07f-ae7d8097cc6d", "inSvdSeqNo": 69, "extensionsLastModifiedTickCount": 638693753947160515, "extensionGoalStatesSource": "FastTrack", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://md-z9z999zzzz99.z38.blob.storage.azure.net/$system/vm01.667ae9f0-zz99-99z9-9zz9-9z9z99999zz9.status?sv=2018-03-28&sr=b&sk=system-1&sig=9ZzZZzZZZZ999zzz%2bzz99zzZzZZZ99999zz%2b9ZZZ%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "gaFamilies": [ { "name": "Prod", "version": "2.12.0.2", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.10", "location": "https://umsalhsl5scrdmt03dql.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverLocation": "https://umsaff4lwzr0mqltcdsj.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ 
"https://umsalvlwk4lfvxs4w05v.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 1, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "FF56DAC2F36EDDE292DE9D49B200D7CBEE05D2F4", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWMicrosoft.Azure.Extensions.CustomScript/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==" } ] }, { "name": "Microsoft.Azure.AzureDefenderForServers.MDE.Linux", "version": "1.0.4.7", "location": "https://umsavph101h3qhc1pz3t.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml", "failoverLocation": "https://umsazq0spn4w5z3grwcv.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml", "additionalLocations": [ "https://umsargnfzcnssnpfskrr.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 12, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "FF56DAC2F36EDDE292DE9D49B200D7CBEE05D2F4", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWMicrosoft.Azure.AzureDefenderForServers.MDE.Linux/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==", "publicSettings": "{\"azureResourceId\":\"/subscriptions/z99z9999/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm01\",\"forceReOnboarding\":false,\"vNextEnabled\":false,\"autoUpdate\":true}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandLinux", 
"version": "1.0.5", "location": "https://umsac5lzx3m5mmb1qsnk.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml", "failoverLocation": "https://umsarmtx2bjcqg0cj14s.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml", "additionalLocations": [ "https://umsapqg4jwbpzxdrjjwr.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 13, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "FF56DAC2F36EDDE292DE9D49B200D7CBEE05D2F4", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWMicrosoft.CPlat.Core.RunCommandLinux/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==", "publicSettings": "{}" } ] } ] }vm_settings-requested_version_properties_false.json000066400000000000000000000172331510742556200345270ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hostgaplugin{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726699999999999, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", 
"vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "version": "9.9.9.9", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "version": "9.9.9.9", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": 
"https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ] } ] 
}vm_settings-supported_hgap_version_for_signature.json000066400000000000000000000431171510742556200350530ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hostgaplugin{ "hostGAPluginVersion": "1.0.8.159", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": 
"https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "encodedSignature": "MIInEAYJKoZIhvcNAQcCoIInATCCJv0CAQMxDTALBglghkgBZQMEAgIwCQYHgUuDSAcICaCCDXYwggX0MIID3KADAgECAhMzAAADrzBADkyjTQVBAAAAAAOvMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTEwHhcNMjMxMTE2MTkwOTAwWhcNMjQxMTE0MTkwOTAwWjB0MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDOS8s1ra6f0YGtg0OhEaQa/t3Q+q1MEHhWJhqQVuO5amYXQpy8MDPNoJYk+FWAhePP5LxwcSge5aen+f5Q6WNPd6EDxGzotvVpNi5ve0H97S3F7C/axDfKxyNh21MG0W8Sb0vxi/vorcLHOL9i+t2D6yvvDzLlEefUCbQV/zGCBjXGlYJcUj6RAzXyeNANxSpKXAGd7Fh+ocGHPPphcD9LQTOJgG7Y7aYztHqBLJiQQ4eAgZNU4ac6+8LnEGALgo1ydC5BJEuJQjYKbNTy959HrKSu7LO3Ws0w8jw6pYdC1IMpdTkk2puTgY2PDNzBtLM4evG7FYer3WX+8t1UMYNTAgMBAAGjggFzMIIBbzAfBgNVHSUEGDAWBgorBgEEAYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQURxxxNPIEPGSO8kqz+bgCAQWGXsEwRQYDVR0RBD4wPKQ6MDgxHjAcBgNVBAsTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UEBRMNMjMwMDEyKzUwMTgyNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0
cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAISxFt/zR2frTFPB45YdmhZpB2nNJoOoi+qlgcTlnO4QwlYN1w/vYwbDy/oFJolD5r6FMJd0RGcgEM8q9TgQ2OC7gQEmhweVJ7yuKJlQBH7P7Pg5RiqgV3cSonJ+OM4kFHbP3gPLiyzssSQdRuPY1mIWoGg9i7Y4ZC8ST7WhpSyc0pns2XsUe1XsIjaUcGu7zd7gg97eCUiLRdVklPmpXobH9CEAWakRUGNICYN2AgjhRTC4j3KJfqMkU04R6Toyh4/Toswm1uoDcGr5laYnTfcX3u5WnJqJLhuPe8Uj9kGAOcyo0O1mNwDa+LhFEzB6CB32+wfJMumfr6degvLTe8x55urQLeTjimBQgS49BSUkhFN7ois3cZyNpnrMca5AZaC7pLI72vuqSsSlLalGOcZmPHZGYJqZ0BacN274OZ80Q8B11iNokns9Od348bMb5Z4fihxaBWebl8kWEi2OPvQImOAeq3nt7UWJBzJYLAGEpfasaA3ZQgIcEXdD+uwo6ymMzDY6UamFOfYqYWXkntxDGu7ngD2ugKUuccYKJJRiiz+LAUcj90BVcSHRLQop9N8zoALr/1sJuwPrVAtxHNEgSW+AKBqIxYWM4Ev32l6agSUAezLMbq5f3d8x9qzT031jMDT+sUAoCw0M5wVtCUQcqINPuYjbS1WgJyZIiEkBMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1Ud
DgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCGWIwghleAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAOvMEAOTKNNBUEAAAAAA68wCwYJYIZIAWUDBAICoFkwFgYJKoZIhvcNAQkDMQkGB4FLg0gHCAkwPwYJKoZIhvcNAQkEMTIEMDBbd8WC98w2hp0LRsyGXkhY0ZY+y0Pl20deVXonOXR+vDsyK96L9uBzpNRlolZD0DANBgkqhkiG9w0BAQEFAASCAQAIaK9t6Unz6YcKR2q8D2Vjvq9j+YK0U1+tb8s2ZslmmL19Yeb+NRy4tkS7lVEmMYRiFTy+jyis6UGL81ziXEXqAfqjkJt/zjN/8Qek91fzKYJMuCfEm6xVv+gfNHCp0fuGn4b9QNoD7UUMe4oBskSSLSiW0ri9FblSdjeoLZKvoRzHFBF94wI2Kw0iCBUQgNKHKT3lyG9D4NQySAaS0BnYG/s/HPgGMPT6peWRWAXkuTQ8zxb98pOzdf3HZ4Zz2n8qEh1BM6nHba2CKnDP0yjEz7OERVWcLUVP
cTHC/xG94cp1gdlKQ09t3H7lBwccxmztUt9sIGUAdeJFAChTvvnSoYIXRDCCF0AGCyqGSIb3DQEJEAIOMYIXLzCCFysGCSqGSIb3DQEHAqCCFxwwghcYAgEDMQ8wDQYJYIZIAWUDBAIBBQAwggFzBgsqhkiG9w0BCRABBKCCAWIEggFeMIIBWgIBAQYKKwYBBAGEWQoDATAxMA0GCWCGSAFlAwQCAQUABCALbe+1JlANO/4xRH8dJHYO8uMX6ee/KhxzL1ZHE4fguAIGZnLzb33XGBMyMDI0MDYyMDIzMzgyOS4yMzNaMASAAgH0AhgsprYE/OXhkFp093+I2SkmqEFqhU3g+VWggdikgdUwgdIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEmMCQGA1UECxMdVGhhbGVzIFRTUyBFU046ODZERi00QkJDLTkzMzUxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNlcnZpY2WgghF4MIIHJzCCBQ+gAwIBAgITMwAAAd1dVx2V1K2qGwABAAAB3TANBgkqhkiG9w0BAQsFADB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMDAeFw0yMzEwMTIxOTA3MDlaFw0yNTAxMTAxOTA3MDlaMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNlMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqE4DlETqLnecdREfiWd8oun70m+Km5O1y1qKsLExRKs9LLkJYrYO2uJA/5PnYdds3aDsCS1DWlBltMMYXMrp3Te9hg2sI+4kr49Gw/YU9UOMFfLmastEXMgcctqIBqhsTm8Um6jFnRlZ0owKzxpyOEdSZ9pj7v38JHu434Hj7GMmrC92lT+anSYCrd5qvIf4Aqa/qWStA3zOCtxsKAfCyq++pPqUQWpimLu4qfswBhtJ4t7Skx1q1XkRbo1Wdcxg5NEq4Y9/J8Ep1KG5qUujzyQbupraZsDmXvv5fTokB6wySjJivj/0KAMWMdSlwdI4O6OUUEoyLXrzNF0t6t2lbRsFf0QO7HbMEwxoQrw3LFrAIS4Crv77uS0UBuXeFQq27NgLUVRm5SXYGrpTXtLgIqypHeK0tP2o1xvakAniOsgN2WXlOCip5/mCm/5hy8EzzfhtcU3DK13e6MMPbg/0N3zF9Um+6aOwFBCQrlP+rLcetAny53WcdK+0VWLlJr+5sa5gSlLyAXoYNY3n8pu94WR2yhNUg+jymRaGM+zRDucDn64HFAHjOWMSMrPlZbsEDjCmYWbbh+EGZGNXg1un6fvxyACO8NJ9OUDoNgFy/aTHUkfZ0iFpGdJ45d49PqEwXQiXn3wsy7SvDflWJRZwBCRQ1RPFGeoYXHPnD5m6wwMCAwEAAaOCAUkwggFFMB0GA1UdDgQWBBRuovW2jI9R2kXLIdIMpaPQjiXD8TAfBgNV
HSMEGDAWgBSfpxVdAF5iXYP05dJlpxtTNRnpcjBfBgNVHR8EWDBWMFSgUqBQhk5odHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNyb3NvZnQlMjBUaW1lLVN0YW1wJTIwUENBJTIwMjAxMCgxKS5jcmwwbAYIKwYBBQUHAQEEYDBeMFwGCCsGAQUFBzAChlBodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NlcnRzL01pY3Jvc29mdCUyMFRpbWUtU3RhbXAlMjBQQ0ElMjAyMDEwKDEpLmNydDAMBgNVHRMBAf8EAjAAMBYGA1UdJQEB/wQMMAoGCCsGAQUFBwMIMA4GA1UdDwEB/wQEAwIHgDANBgkqhkiG9w0BAQsFAAOCAgEALlTZsg0uBcgdZsxypW5/2ORRP8rzPIsG+7mHwmuphHbP95o7bKjU6hz1KHK/Ft70ZkO7uSRTPFLInUhmSxlnDoUOrrJk1Pc8SMASdESlEEvxL6ZteD47hUtLQtKZvxchmIuxqpnR8MRy/cd4D7/L+oqcJBaReCGloQzAYxDNGSEbBwZ1evXMalDsdPG9+7nvEXFlfUyQqdYUQ0nq6t37i15SBePSeAg7H/+Xdcwrce3xPb7O8Yk0AX7n/moGTuevTv3MgJsVe/G2J003l6hd1b72sAiRL5QYPX0Bl0Gu23p1n450Cq4GIORhDmRV9QwpLfXIdA4aCYXG4I7NOlYdqWuql0iWWzLwo2yPlT2w42JYB3082XIQcdtBkOaL38E2U5jJO3Rh6EtsOi+ZlQ1rOTv0538D3XuaoJ1OqsTHAEZQ9sw/7+91hSpomym6kGdS2M5//voMCFXLx797rNH3w+SmWaWI7ZusvdDesPr5kJV2sYz1GbqFQMEGS9iH5iOYZ1xDkcHpZP1F5zz6oMeZuEuFfhl1pqt3n85d4tuDHZ/svhBBCPcqCqOoM5YidWE0TWBi1NYsd7jzzZ3+Tsu6LQrWDwRmsoPuZo6uwkso8qV6Bx4n0UKpjWwNQpSFFrQQdRb5mQouWiEqtLsXCN2sg1aQ8GBtDOcKN0TabjtCNNswggdxMIIFWaADAgECAhMzAAAAFcXna54Cm0mZAAAAAAAVMA0GCSqGSIb3DQEBCwUAMIGIMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTIwMAYDVQQDEylNaWNyb3NvZnQgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgMjAxMDAeFw0yMTA5MzAxODIyMjVaFw0zMDA5MzAxODMyMjVaMHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA5OGmTOe0ciELeaLL1yR5vQ7VgtP97pwHB9KpbE51yMo1V/YBf2xK4OK9uT4XYDP/XE/HZveVU3Fa4n5KWv64NmeFRiMMtY0Tz3cywBAY6GB9alKDRLemjkZrBxTzxXb1hlDcwUTIcVxRMTegCjhuje3XD9gmU3w5YQJ6xKr9cmmvHaus9ja+NSZk2pg7uhp7M62AW36MEBydUv626GIl3GoPz130/o5Tz9bshVZN7928jaTjkY+yOSxRnOlwaQ3KNi1wjjHINSi947SHJMPgyY9+tVSP3PoFVZhtaDuaRr3tpK56KTesy+uDRedGbsoy1cCGMFxPLOJiss254o2I5JasAUq7vnGpF1tnYN74kpEeHT39IM9zfUGaRnXNxF803RKJ1v2lIH1+/Nme
Rd+2ci/bfV+AutuqfjbsNkz2K26oElHovwUDo9Fzpk03dJQcNIIP8BDyt0cY7afomXw/TNuvXsLz1dhzPUNOwTM5TI4CvEJoLhDqhFFG4tG9ahhaYQFzymeiXtcodgLiMxhy16cg8ML6EgrXY28MyTZki1ugpoMhXV8wdJGUlNi5UPkLiWHzNgY1GIRH29wb0f2y1BzFa/ZcUlFdEtsluq9QBXpsxREdcu+N+VLEhReTwDwV2xo3xwgVGD94q0W29R6HXtqPnhZyacaue7e3PmriLq0CAwEAAaOCAd0wggHZMBIGCSsGAQQBgjcVAQQFAgMBAAEwIwYJKwYBBAGCNxUCBBYEFCqnUv5kxJq+gpE8RjUpzxD/LwTuMB0GA1UdDgQWBBSfpxVdAF5iXYP05dJlpxtTNRnpcjBcBgNVHSAEVTBTMFEGDCsGAQQBgjdMg30BATBBMD8GCCsGAQUFBwIBFjNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL0RvY3MvUmVwb3NpdG9yeS5odG0wEwYDVR0lBAwwCgYIKwYBBQUHAwgwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAU1fZWy4/oolxiaNE9lJBb186aGMQwVgYDVR0fBE8wTTBLoEmgR4ZFaHR0cDovL2NybC5taWNyb3NvZnQuY29tL3BraS9jcmwvcHJvZHVjdHMvTWljUm9vQ2VyQXV0XzIwMTAtMDYtMjMuY3JsMFoGCCsGAQUFBwEBBE4wTDBKBggrBgEFBQcwAoY+aHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXRfMjAxMC0wNi0yMy5jcnQwDQYJKoZIhvcNAQELBQADggIBAJ1VffwqreEsH2cBMSRb4Z5yS/ypb+pcFLY+TkdkeLEGk5c9MTO1OdfCcTY/2mRsfNB1OW27DzHkwo/7bNGhlBgi7ulmZzpTTd2YurYeeNg2LpypglYAA7AFvonoaeC6Ce5732pvvinLbtg/SHUB2RjebYIM9W0jVOR4U3UkV7ndn/OOPcbzaN9l9qRWqveVtihVJ9AkvUCgvxm2EhIRXT0n4ECWOKz3+SmJw7wXsFSFQrP8DJ6LGYnn8AtqgcKBGUIZUnWKNsIdw2FzLixre24/LAl4FOmRsqlb30mjdAy87JGA0j3mSj5mO0+7hvoyGtmW9I/2kQH2zsZ0/fZMcm8Qq3UwxTSwethQ/gpY3UA8x1RtnWN0SCyxTkctwRQEcb9k+SS+c23Kjgm9swFXSVRk2XPXfx5bRAGOWhmRaw2fpCjcZxkoJLo4S5pu+yFUa2pFEUep8beuyOiJXk+d0tBMdrVXVAmxaQFEfnyhYWxz/gq77EFmPWn9y8FBSX5+k77L+DvktxW/tM4+pTFRhLy/AsGConsXHRWJjXD+57XQKBqJC4822rpM+Zv/Cuk0+CQ1ZyvgDbjmjJnW4SLq8CdCPSWU5nR0W2rRnj7tfqAxM328y+l7vzhwRNGQ8cirOoo6CGJ/2XBjU02N7oJtpQUQwXEGahC0HVUzWLOhcGbyoYIC1DCCAj0CAQEwggEAoYHYpIHVMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNloiMKAQEwBwYFKw4DAhoDFQA2I0cZZds1oM/GfKINsQ5yJKMW
EKCBgzCBgKR+MHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMA0GCSqGSIb3DQEBBQUAAgUA6h4aiTAiGA8yMDI0MDYyMDExMDMzN1oYDzIwMjQwNjIxMTEwMzM3WjB0MDoGCisGAQQBhFkKBAExLDAqMAoCBQDqHhqJAgEAMAcCAQACAgX7MAcCAQACAhH8MAoCBQDqH2wJAgEAMDYGCisGAQQBhFkKBAIxKDAmMAwGCisGAQQBhFkKAwKgCjAIAgEAAgMHoSChCjAIAgEAAgMBhqAwDQYJKoZIhvcNAQEFBQADgYEAGfu+JpdwJYpU+xUOu693Nef9bUv1la7pxXUtY+P82b5q8/FFZp5WUobGx6JrVuJTDuvqbEZYjwTzWIVUHog1kTXjji1NCFLCVnrlJqPwtH9uRQhnFDSmiP0tG1rNwht6ZViFrRexp+7cebOHSPfk+ZzrUyp9DptMAJmagfLClxAxggQNMIIECQIBATCBkzB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAITMwAAAd1dVx2V1K2qGwABAAAB3TANBglghkgBZQMEAgEFAKCCAUowGgYJKoZIhvcNAQkDMQ0GCyqGSIb3DQEJEAEEMC8GCSqGSIb3DQEJBDEiBCCZX/UOu+vfJ4kbHbQYoi1Ztz4aZycnWIB1vBYNNo/atDCB+gYLKoZIhvcNAQkQAi8xgeowgecwgeQwgb0EIGH/Di2aZaxPeJmce0fRWTftQI3TaVHFj5GI43rAMWNmMIGYMIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTACEzMAAAHdXVcdldStqhsAAQAAAd0wIgQg5Fd0dBTHG2u3SYEF2YcmJ7rHH4kHcV0GlSr/y6AQOYEwDQYJKoZIhvcNAQELBQAEggIAGcOQBnVMUPnu4d2wmccNjUncMe5i0C5VkJ7/VjqN4W6vSuKz7BFVIaUMoufkY94epjipx+Ip3BTj2heew7xB+f6zBKTlkXfakH7TEWeju3WzUYNt3kjJyS3SJeJGFJEiln1S6apObwPtbSq9EqwwFOt8pJy9bAvoxuRM6Olib/eiHr3uiKkk6FCccUgG0PYN/PRUU7htzv6uyRXzCpuNpld3eorXt6nqt6bP7k1NFcwcYSv7V3WcoQzObk5Y9G5n/1rc5Hy9eRHwnz1l7MWOZGsJ9swOBFmoVUK8tB1vPy3bjooJBm7jRT9AcdGTaRS/t5nYe5sECI51sIyq3UBPCH8rNse1BIX9WCtcar1Bg6L64lzdPC7FVSh03vVlDZhNNf7tWRZqlYID2zTaY4p4LIW47O0/Rw2Swe4+hvl49e0v0m0FnmmwXN5097waF3Xv7FIDxbcrK+0DTv2p810Igwj6tErwxhP/367Q9EBzxODSJ8uD35DGMmHsTnViavQUBzj8LeTiA6sUZhF54AbI5dQkZLPydlR3GCmo1RKKO1VhDZnpFanj/N856MOlQqe/6x8sguPM+OpF6MWGvQH5SxsSzSf6dxhzS2pEHbirwJ4k1+tuF0LKOxNLwVVQQ9qPABNiWqml4bJk9oZ1dOTDd9EFjepHqynKk4olY3kq5sA=
", "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] } ] } vm_settings-unsupported_hgap_version_for_signature.json000066400000000000000000000063641510742556200354210ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/hostgaplugin{ "hostGAPluginVersion": "1.0.8.158", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": 
"https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": 
true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings-unsupported_version.json000066400000000000000000000062671510742556200315540ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.116", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205217, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } 
], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] } ] } Azure-WALinuxAgent-a976115/tests/data/hostgaplugin/vm_settings.json000066400000000000000000000553631510742556200254220ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { 
"name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "encodedSignature": 
"MIInEAYJKoZIhvcNAQcCoIInATCCJv0CAQMxDTALBglghkgBZQMEAgIwCQYHgUuDSAcICaCCDXYwggX0MIID3KADAgECAhMzAAADrzBADkyjTQVBAAAAAAOvMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTEwHhcNMjMxMTE2MTkwOTAwWhcNMjQxMTE0MTkwOTAwWjB0MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDOS8s1ra6f0YGtg0OhEaQa/t3Q+q1MEHhWJhqQVuO5amYXQpy8MDPNoJYk+FWAhePP5LxwcSge5aen+f5Q6WNPd6EDxGzotvVpNi5ve0H97S3F7C/axDfKxyNh21MG0W8Sb0vxi/vorcLHOL9i+t2D6yvvDzLlEefUCbQV/zGCBjXGlYJcUj6RAzXyeNANxSpKXAGd7Fh+ocGHPPphcD9LQTOJgG7Y7aYztHqBLJiQQ4eAgZNU4ac6+8LnEGALgo1ydC5BJEuJQjYKbNTy959HrKSu7LO3Ws0w8jw6pYdC1IMpdTkk2puTgY2PDNzBtLM4evG7FYer3WX+8t1UMYNTAgMBAAGjggFzMIIBbzAfBgNVHSUEGDAWBgorBgEEAYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQURxxxNPIEPGSO8kqz+bgCAQWGXsEwRQYDVR0RBD4wPKQ6MDgxHjAcBgNVBAsTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UEBRMNMjMwMDEyKzUwMTgyNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAISxFt/zR2frTFPB45YdmhZpB2nNJoOoi+qlgcTlnO4QwlYN1w/vYwbDy/oFJolD5r6FMJd0RGcgEM8q9TgQ2OC7gQEmhweVJ7yuKJlQBH7P7Pg5RiqgV3cSonJ+OM4kFHbP3gPLiyzssSQdRuPY1mIWoGg9i7Y4ZC8ST7WhpSyc0pns2XsUe1XsIjaUcGu7zd7gg97eCUiLRdVklPmpXobH9CEAWakRUGNICYN2AgjhRTC4j3KJfqMkU04R6Toyh4/Toswm1uoDcGr5laYnTfcX3u5WnJqJLhuPe8Uj9kGAOcyo0O1mNwDa+LhFEzB6CB32+wfJMumfr6degvLTe8x55urQLeTjimBQgS49BSUkhFN7ois3cZyNpnrMca5AZaC7pLI72vuqSsSlLalGOcZmPHZGYJqZ0BacN274OZ80Q8B11iNokns9Od348bMb5Z4fihxaBWebl8kWEi2OPvQImOAeq3nt7UWJBzJYLAGEpfasaA3ZQgIcEXdD+uwo6ymMzDY6UamFOfYqYWXkntxDGu7ngD2ugKUuccYKJJRiiz+LAUcj90BVcSH
RLQop9N8zoALr/1sJuwPrVAtxHNEgSW+AKBqIxYWM4Ev32l6agSUAezLMbq5f3d8x9qzT031jMDT+sUAoCw0M5wVtCUQcqINPuYjbS1WgJyZIiEkBMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRo
u09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCGWIwghleAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAOvMEAOTKNNBUEAAAAAA68wCwYJYIZIAWUDBAICoFkwFgYJKoZIhvcNAQkDMQkGB4FLg0gHCAkwPwYJKoZIhvcNAQkEMTIEMDBbd8WC98w2hp0LRsyGXkhY0ZY+y0Pl20deVXonOXR+vDsyK96L9uBzpNRlolZD0DANBgkqhkiG9w0BAQEFAASCAQAIaK9t6Unz6YcKR2q8D2Vjvq9j+YK0U1+tb8s2ZslmmL19Yeb+NRy4tkS7lVEmMYRiFTy+jyis6UGL81ziXEXqAfqjkJt/zjN/8Qek91fzKYJMuCfEm6xVv+gfNHCp0fuGn4b9QNoD7UUMe4oBskSSLSiW0ri9FblSdjeoLZKvoRzHFBF94wI2Kw0iCBUQgNKHKT3lyG9D4NQySAaS0BnYG/s/HPgGMPT6peWRWAXkuTQ8zxb98pOzdf3HZ4Zz2n8qEh1BM6nHba2CKnDP0yjEz7OERVWcLUVPcTHC/xG94cp1gdlKQ09t3H7lBwccxmztUt9sIGUAdeJFAChTvvnSoYIXRDCCF0AGCyqGSIb3DQEJEAIOMYIXLzCCFysGCSqGSIb3DQEHAqCCFxwwghcYAgEDMQ8wDQYJYIZIAWUDBAIBBQAwggFzBgsqhkiG9w0BCRABBKCCAWIEggFeMIIBWgIBAQYKKwYBBAGEWQoDATAxMA0GCWCGSAFlAwQCAQUABCALbe+1JlANO/4xRH8dJHYO8uMX6ee/KhxzL1ZHE4fguAIGZnLzb33XGBMyMDI0MDYyMDIzMzgyOS4yMzNaMASAAgH0AhgsprYE/OXhkFp093+I2SkmqEFqhU3g+VWggdikgdUwgdIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEmMCQGA1UECxMdVGhhbGVzIFRTUyBFU046ODZERi00QkJDLTkzMzUxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNlcnZpY2WgghF4MIIHJzCCBQ+
gAwIBAgITMwAAAd1dVx2V1K2qGwABAAAB3TANBgkqhkiG9w0BAQsFADB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMDAeFw0yMzEwMTIxOTA3MDlaFw0yNTAxMTAxOTA3MDlaMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNlMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqE4DlETqLnecdREfiWd8oun70m+Km5O1y1qKsLExRKs9LLkJYrYO2uJA/5PnYdds3aDsCS1DWlBltMMYXMrp3Te9hg2sI+4kr49Gw/YU9UOMFfLmastEXMgcctqIBqhsTm8Um6jFnRlZ0owKzxpyOEdSZ9pj7v38JHu434Hj7GMmrC92lT+anSYCrd5qvIf4Aqa/qWStA3zOCtxsKAfCyq++pPqUQWpimLu4qfswBhtJ4t7Skx1q1XkRbo1Wdcxg5NEq4Y9/J8Ep1KG5qUujzyQbupraZsDmXvv5fTokB6wySjJivj/0KAMWMdSlwdI4O6OUUEoyLXrzNF0t6t2lbRsFf0QO7HbMEwxoQrw3LFrAIS4Crv77uS0UBuXeFQq27NgLUVRm5SXYGrpTXtLgIqypHeK0tP2o1xvakAniOsgN2WXlOCip5/mCm/5hy8EzzfhtcU3DK13e6MMPbg/0N3zF9Um+6aOwFBCQrlP+rLcetAny53WcdK+0VWLlJr+5sa5gSlLyAXoYNY3n8pu94WR2yhNUg+jymRaGM+zRDucDn64HFAHjOWMSMrPlZbsEDjCmYWbbh+EGZGNXg1un6fvxyACO8NJ9OUDoNgFy/aTHUkfZ0iFpGdJ45d49PqEwXQiXn3wsy7SvDflWJRZwBCRQ1RPFGeoYXHPnD5m6wwMCAwEAAaOCAUkwggFFMB0GA1UdDgQWBBRuovW2jI9R2kXLIdIMpaPQjiXD8TAfBgNVHSMEGDAWgBSfpxVdAF5iXYP05dJlpxtTNRnpcjBfBgNVHR8EWDBWMFSgUqBQhk5odHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNyb3NvZnQlMjBUaW1lLVN0YW1wJTIwUENBJTIwMjAxMCgxKS5jcmwwbAYIKwYBBQUHAQEEYDBeMFwGCCsGAQUFBzAChlBodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NlcnRzL01pY3Jvc29mdCUyMFRpbWUtU3RhbXAlMjBQQ0ElMjAyMDEwKDEpLmNydDAMBgNVHRMBAf8EAjAAMBYGA1UdJQEB/wQMMAoGCCsGAQUFBwMIMA4GA1UdDwEB/wQEAwIHgDANBgkqhkiG9w0BAQsFAAOCAgEALlTZsg0uBcgdZsxypW5/2ORRP8rzPIsG+7mHwmuphHbP95o7bKjU6hz1KHK/Ft70ZkO7uSRTPFLInUhmSxlnDoUOrrJk1Pc8SMASdESlEEvxL6ZteD47hUtLQtKZvxchmIuxqpnR8MRy/cd4D7/L+oqcJBaReCGloQzAYxDNGSEbBwZ1evXMalDsdPG9+7nvEXFlfUyQqdYUQ0nq6t37i15SBePSeAg7H/+Xdcwrce3xPb7O8Yk0AX7
n/moGTuevTv3MgJsVe/G2J003l6hd1b72sAiRL5QYPX0Bl0Gu23p1n450Cq4GIORhDmRV9QwpLfXIdA4aCYXG4I7NOlYdqWuql0iWWzLwo2yPlT2w42JYB3082XIQcdtBkOaL38E2U5jJO3Rh6EtsOi+ZlQ1rOTv0538D3XuaoJ1OqsTHAEZQ9sw/7+91hSpomym6kGdS2M5//voMCFXLx797rNH3w+SmWaWI7ZusvdDesPr5kJV2sYz1GbqFQMEGS9iH5iOYZ1xDkcHpZP1F5zz6oMeZuEuFfhl1pqt3n85d4tuDHZ/svhBBCPcqCqOoM5YidWE0TWBi1NYsd7jzzZ3+Tsu6LQrWDwRmsoPuZo6uwkso8qV6Bx4n0UKpjWwNQpSFFrQQdRb5mQouWiEqtLsXCN2sg1aQ8GBtDOcKN0TabjtCNNswggdxMIIFWaADAgECAhMzAAAAFcXna54Cm0mZAAAAAAAVMA0GCSqGSIb3DQEBCwUAMIGIMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTIwMAYDVQQDEylNaWNyb3NvZnQgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgMjAxMDAeFw0yMTA5MzAxODIyMjVaFw0zMDA5MzAxODMyMjVaMHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA5OGmTOe0ciELeaLL1yR5vQ7VgtP97pwHB9KpbE51yMo1V/YBf2xK4OK9uT4XYDP/XE/HZveVU3Fa4n5KWv64NmeFRiMMtY0Tz3cywBAY6GB9alKDRLemjkZrBxTzxXb1hlDcwUTIcVxRMTegCjhuje3XD9gmU3w5YQJ6xKr9cmmvHaus9ja+NSZk2pg7uhp7M62AW36MEBydUv626GIl3GoPz130/o5Tz9bshVZN7928jaTjkY+yOSxRnOlwaQ3KNi1wjjHINSi947SHJMPgyY9+tVSP3PoFVZhtaDuaRr3tpK56KTesy+uDRedGbsoy1cCGMFxPLOJiss254o2I5JasAUq7vnGpF1tnYN74kpEeHT39IM9zfUGaRnXNxF803RKJ1v2lIH1+/NmeRd+2ci/bfV+AutuqfjbsNkz2K26oElHovwUDo9Fzpk03dJQcNIIP8BDyt0cY7afomXw/TNuvXsLz1dhzPUNOwTM5TI4CvEJoLhDqhFFG4tG9ahhaYQFzymeiXtcodgLiMxhy16cg8ML6EgrXY28MyTZki1ugpoMhXV8wdJGUlNi5UPkLiWHzNgY1GIRH29wb0f2y1BzFa/ZcUlFdEtsluq9QBXpsxREdcu+N+VLEhReTwDwV2xo3xwgVGD94q0W29R6HXtqPnhZyacaue7e3PmriLq0CAwEAAaOCAd0wggHZMBIGCSsGAQQBgjcVAQQFAgMBAAEwIwYJKwYBBAGCNxUCBBYEFCqnUv5kxJq+gpE8RjUpzxD/LwTuMB0GA1UdDgQWBBSfpxVdAF5iXYP05dJlpxtTNRnpcjBcBgNVHSAEVTBTMFEGDCsGAQQBgjdMg30BATBBMD8GCCsGAQUFBwIBFjNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL0RvY3MvUmVwb3NpdG9yeS5odG0wEwYDVR0lBAwwCgYIKwYBBQUHAwgwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBg
wFoAU1fZWy4/oolxiaNE9lJBb186aGMQwVgYDVR0fBE8wTTBLoEmgR4ZFaHR0cDovL2NybC5taWNyb3NvZnQuY29tL3BraS9jcmwvcHJvZHVjdHMvTWljUm9vQ2VyQXV0XzIwMTAtMDYtMjMuY3JsMFoGCCsGAQUFBwEBBE4wTDBKBggrBgEFBQcwAoY+aHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXRfMjAxMC0wNi0yMy5jcnQwDQYJKoZIhvcNAQELBQADggIBAJ1VffwqreEsH2cBMSRb4Z5yS/ypb+pcFLY+TkdkeLEGk5c9MTO1OdfCcTY/2mRsfNB1OW27DzHkwo/7bNGhlBgi7ulmZzpTTd2YurYeeNg2LpypglYAA7AFvonoaeC6Ce5732pvvinLbtg/SHUB2RjebYIM9W0jVOR4U3UkV7ndn/OOPcbzaN9l9qRWqveVtihVJ9AkvUCgvxm2EhIRXT0n4ECWOKz3+SmJw7wXsFSFQrP8DJ6LGYnn8AtqgcKBGUIZUnWKNsIdw2FzLixre24/LAl4FOmRsqlb30mjdAy87JGA0j3mSj5mO0+7hvoyGtmW9I/2kQH2zsZ0/fZMcm8Qq3UwxTSwethQ/gpY3UA8x1RtnWN0SCyxTkctwRQEcb9k+SS+c23Kjgm9swFXSVRk2XPXfx5bRAGOWhmRaw2fpCjcZxkoJLo4S5pu+yFUa2pFEUep8beuyOiJXk+d0tBMdrVXVAmxaQFEfnyhYWxz/gq77EFmPWn9y8FBSX5+k77L+DvktxW/tM4+pTFRhLy/AsGConsXHRWJjXD+57XQKBqJC4822rpM+Zv/Cuk0+CQ1ZyvgDbjmjJnW4SLq8CdCPSWU5nR0W2rRnj7tfqAxM328y+l7vzhwRNGQ8cirOoo6CGJ/2XBjU02N7oJtpQUQwXEGahC0HVUzWLOhcGbyoYIC1DCCAj0CAQEwggEAoYHYpIHVMIHSMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMS0wKwYDVQQLEyRNaWNyb3NvZnQgSXJlbGFuZCBPcGVyYXRpb25zIExpbWl0ZWQxJjAkBgNVBAsTHVRoYWxlcyBUU1MgRVNOOjg2REYtNEJCQy05MzM1MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNloiMKAQEwBwYFKw4DAhoDFQA2I0cZZds1oM/GfKINsQ5yJKMWEKCBgzCBgKR+MHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMA0GCSqGSIb3DQEBBQUAAgUA6h4aiTAiGA8yMDI0MDYyMDExMDMzN1oYDzIwMjQwNjIxMTEwMzM3WjB0MDoGCisGAQQBhFkKBAExLDAqMAoCBQDqHhqJAgEAMAcCAQACAgX7MAcCAQACAhH8MAoCBQDqH2wJAgEAMDYGCisGAQQBhFkKBAIxKDAmMAwGCisGAQQBhFkKAwKgCjAIAgEAAgMHoSChCjAIAgEAAgMBhqAwDQYJKoZIhvcNAQEFBQADgYEAGfu+JpdwJYpU+xUOu693Nef9bUv1la7pxXUtY+P82b5q8/FFZp5WUobGx6JrVuJTDuvqbEZYjwTzWIVUHog1kTXjji1NCFLCVnrlJqPwtH9uRQhnFDSmiP0tG1rNwht6ZViFrRexp+7cebOHSPfk+ZzrUyp9DptMAJmagfLClxAxggQNMIIECQIBATCBkzB8MQswCQYDVQQGEwJVUzETMBEGA1U
ECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAITMwAAAd1dVx2V1K2qGwABAAAB3TANBglghkgBZQMEAgEFAKCCAUowGgYJKoZIhvcNAQkDMQ0GCyqGSIb3DQEJEAEEMC8GCSqGSIb3DQEJBDEiBCCZX/UOu+vfJ4kbHbQYoi1Ztz4aZycnWIB1vBYNNo/atDCB+gYLKoZIhvcNAQkQAi8xgeowgecwgeQwgb0EIGH/Di2aZaxPeJmce0fRWTftQI3TaVHFj5GI43rAMWNmMIGYMIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTACEzMAAAHdXVcdldStqhsAAQAAAd0wIgQg5Fd0dBTHG2u3SYEF2YcmJ7rHH4kHcV0GlSr/y6AQOYEwDQYJKoZIhvcNAQELBQAEggIAGcOQBnVMUPnu4d2wmccNjUncMe5i0C5VkJ7/VjqN4W6vSuKz7BFVIaUMoufkY94epjipx+Ip3BTj2heew7xB+f6zBKTlkXfakH7TEWeju3WzUYNt3kjJyS3SJeJGFJEiln1S6apObwPtbSq9EqwwFOt8pJy9bAvoxuRM6Olib/eiHr3uiKkk6FCccUgG0PYN/PRUU7htzv6uyRXzCpuNpld3eorXt6nqt6bP7k1NFcwcYSv7V3WcoQzObk5Y9G5n/1rc5Hy9eRHwnz1l7MWOZGsJ9swOBFmoVUK8tB1vPy3bjooJBm7jRT9AcdGTaRS/t5nYe5sECI51sIyq3UBPCH8rNse1BIX9WCtcar1Bg6L64lzdPC7FVSh03vVlDZhNNf7tWRZqlYID2zTaY4p4LIW47O0/Rw2Swe4+hvl49e0v0m0FnmmwXN5097waF3Xv7FIDxbcrK+0DTv2p810Igwj6tErwxhP/367Q9EBzxODSJ8uD35DGMmHsTnViavQUBzj8LeTiA6sUZhF54AbI5dQkZLPydlR3GCmo1RKKO1VhDZnpFanj/N856MOlQqe/6x8sguPM+OpF6MWGvQH5SxsSzSf6dxhzS2pEHbirwJ4k1+tuF0LKOxNLwVVQQ9qPABNiWqml4bJk9oZ1dOTDd9EFjepHqynKk4olY3kq5sA=", "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": 
"https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": 
"https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ], "dependsOn": [ { "dependsOnExtension": [ { "extension": "...", "handler": "..." }, { "extension": "...", "handler": "..." } ], "dependencyLevel": 2, "name": "MCExt1" }, { "dependsOnExtension": [ { "extension": "...", "handler": "..." 
} ], "dependencyLevel": 1, "name": "MCExt2" } ] }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpddesZQewdDBgegkxNzA1BgoJkgergres/Microsoft.OSTCExtensions.VMAccessForLinux==" } ] } ] } Azure-WALinuxAgent-a976115/tests/data/imds/000077500000000000000000000000001510742556200204015ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/imds/unicode.json000066400000000000000000000020031510742556200227150ustar00rootroot00000000000000{ "compute": { "location": "wéstus", "name": "héalth", "offer": "UbuntuSérvér", "osType": "Linux", "placementGroupId": "", "platformFaultDomain": "0", "platformUpdateDomain": "0", "publisher": "Canonical", "resourceGroupName": "tésts", "sku": "16.04-LTS", "subscriptionId": "21b2dc34-bcé6-4é63-9449-d2a8d1c2339é", "tags": "", "version": "16.04.201805220", "vmId": "é7fdbfc4-2déb-4a4é-8615-éa6aaf50162é", "vmScaleSetName": "", "vmSize": "Standard_D2_V2", "zone": "" }, "network": { "interface": [ { "ipv4": { "ipAddress": [ { "privateIpAddress": "10.0.1.4", "publicIpAddress": "40.112.128.120" } ], "subnet": [ { "address": "10.0.1.0", "prefix": "24" } ] }, "ipv6": { "ipAddress": [] }, "macAddress": "000D3A3382E8" } ] } } 
Azure-WALinuxAgent-a976115/tests/data/imds/valid.json000066400000000000000000000017661510742556200224050ustar00rootroot00000000000000{ "compute": { "location": "westus", "name": "health", "offer": "UbuntuServer", "osType": "Linux", "placementGroupId": "", "platformFaultDomain": "0", "platformUpdateDomain": "0", "publisher": "Canonical", "resourceGroupName": "tests", "sku": "16.04-LTS", "subscriptionId": "21b2dc34-bce6-4e63-9449-d2a8d1c2339e", "tags": "", "version": "16.04.201805220", "vmId": "e7fdbfc4-2deb-4a4e-8615-ea6aaf50162e", "vmScaleSetName": "", "vmSize": "Standard_D2_V2", "zone": "" }, "network": { "interface": [ { "ipv4": { "ipAddress": [ { "privateIpAddress": "10.0.1.4", "publicIpAddress": "40.112.128.120" } ], "subnet": [ { "address": "10.0.1.0", "prefix": "24" } ] }, "ipv6": { "ipAddress": [] }, "macAddress": "000D3A3382E8" } ] } } Azure-WALinuxAgent-a976115/tests/data/init/000077500000000000000000000000001510742556200204105ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/init/12-CPUQuota.conf000066400000000000000000000000261510742556200231360ustar00rootroot00000000000000[Service] CPUQuota=50%Azure-WALinuxAgent-a976115/tests/data/init/azure-vmextensions.slice000066400000000000000000000001671510742556200253230ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Extensions DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes Azure-WALinuxAgent-a976115/tests/data/init/azure-walinuxagent-logcollector.slice000066400000000000000000000002711510742556200277510ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Agent Periodic Log Collector DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes CPUQuota=5% MemoryAccounting=yes MemoryLimit=30MAzure-WALinuxAgent-a976115/tests/data/init/azure.slice000066400000000000000000000001471510742556200225610ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Agent and Extensions DefaultDependencies=no Before=slices.target 
Azure-WALinuxAgent-a976115/tests/data/init/walinuxagent.service000066400000000000000000000007721510742556200245060ustar00rootroot00000000000000# # NOTE: This is the service file used in current versions of the agent (>= 2.2.55) # [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/tests/data/init/walinuxagent.service.previous000066400000000000000000000006771510742556200263650ustar00rootroot00000000000000# # NOTE: This is the service file used in older versions of the agent (<= 2.2.54) # [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/tests/data/init/walinuxagent.service_system-slice000066400000000000000000000010251510742556200271770ustar00rootroot00000000000000# # NOTE: # This file hosted on WALinuxAgent repository only for reference purposes. # Please refer to a recent image to find out the up-to-date systemd unit file. 
# [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always [Install] WantedBy=multi-user.target Azure-WALinuxAgent-a976115/tests/data/metadata/000077500000000000000000000000001510742556200212255ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/metadata/certificates.json000066400000000000000000000002051510742556200245620ustar00rootroot00000000000000{ "certificates":[{ "name":"foo", "thumbprint":"bar", "certificateDataUri":"certificates_data" }] } Azure-WALinuxAgent-a976115/tests/data/metadata/certificates_data.json000066400000000000000000000112531510742556200255600ustar00rootroot00000000000000{"certificateData":"MIINswYJKoZIhvcNAQcDoIINpDCCDaACAQIxggEwMIIBLAIBAoAUvyL+x6GkZXog QNfsXRZAdD9lc7IwDQYJKoZIhvcNAQEBBQAEggEArhMPepD/RqwdPcHEVqvrdZid 72vXrOCuacRBhwlCGrNlg8oI+vbqmT6CSv6thDpet31ALUzsI4uQHq1EVfV1+pXy NlYD1CKhBCoJxs2fSPU4rc8fv0qs5JAjnbtW7lhnrqFrXYcyBYjpURKfa9qMYBmj NdijN+1T4E5qjxPr7zK5Dalp7Cgp9P2diH4Nax2nixotfek3MrEFBaiiegDd+7tE ux685GWYPqB5Fn4OsDkkYOdb0OE2qzLRrnlCIiBCt8VubWH3kMEmSCxBwSJupmQ8 sxCWk+sBPQ9gJSt2sIqfx/61F8Lpu6WzP+ZOnMLTUn2wLU/d1FN85HXmnQALzTCC DGUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIbEcBfddWPv+AggxAAOAt/kCXiffe GeJG0P2K9Q18XZS6Rz7Xcz+Kp2PVgqHKRpPjjmB2ufsRO0pM4z/qkHTOdpfacB4h gz912D9U04hC8mt0fqGNTvRNAFVFLsmo7KXc/a8vfZNrGWEnYn7y1WfP52pqA/Ei SNFf0NVtMyqg5Gx+hZ/NpWAE5vcmRRdoYyWeg13lhlW96QUxf/W7vY/D5KpAGACI ok79/XI4eJkbq3Dps0oO/difNcvdkE74EU/GPuL68yR0CdzzafbLxzV+B43TBRgP jH1hCdRqaspjAaZL5LGfp1QUM8HZIKHuTze/+4dWzS1XR3/ix9q/2QFI7YCuXpuE un3AFYXE4QX/6kcPklZwh9FqjSie3I5HtC1vczqYVjqT4oHrs8ktkZ7oAzeXaXTF k6+JQNNa/IyJw24I1MR77q7HlHSSfhXX5cFjVCd/+SiA4HJQjJgeIuXZ+dXmSPdL 9xLbDbtppifFyNaXdlSzcsvepKy0WLF49RmbL7Bnd46ce/gdQ6Midwi2MTnUtapu 
tHmu/iJtaUpwXXC0B93PHfAk7Y3SgeY4tl/gKzn9/x5SPAcHiNRtOsNBU8ZThzos Wh41xMLZavmX8Yfm/XWtl4eU6xfhcRAbJQx7E1ymGEt7xGqyPV7hjqhoB9i3oR5N itxHgf1+jw/cr7hob+Trd1hFqZO6ePMyWpqUg97G2ThJvWx6cv+KRtTlVA6/r/UH gRGBArJKBlLpXO6dAHFztT3Y6DFThrus4RItcfA8rltfQcRm8d0nPb4lCa5kRbCx iudq3djWtTIe64sfk8jsc6ahWYSovM+NmhbpxEUbZVWLVEcHAYOeMbKgXSu5sxNO JZNeFdzZqDRRY9fGjYNS7DdNOmrMmWKH+KXuMCItpNZsZS/3W7QxAo3ugYLdUylU Zg8H/BjUGZCGn1rEBAuQX78m0SZ1xHlgHSwJIOmxOJUDHLPHtThfbELY9ec14yi5 so1aQwhhfhPvF+xuXBrVeTAfhFNYkf2uxcEp7+tgFAc5W0QfT9SBn5vSvIxv+dT4 7B2Pg1l/zjdsM74g58lmRJeDoz4psAq+Uk7n3ImBhIku9qX632Q1hanjC8D4xM4W sI/W0ADCuAbY7LmwMpAMdrGg//SJUnBftlom7C9VA3EVf8Eo+OZH9hze+gIgUq+E iEUL5M4vOHK2ttsYrSkAt8MZzjQiTlDr1yzcg8fDIrqEAi5arjTPz0n2s0NFptNW lRD+Xz6pCXrnRgR8YSWpxvq3EWSJbZkSEk/eOmah22sFnnBZpDqn9+UArAznXrRi nYK9w38aMGPKM39ymG8kcbY7jmDZlRgGs2ab0Fdj1jl3CRo5IUatkOJwCEMd/tkB eXLQ8hspJhpFnVNReX0oithVZir+j36epk9Yn8d1l+YlKmuynjunKl9fhmoq5Q6i DFzdYpqBV+x9nVhnmPfGyrOkXvGL0X6vmXAEif/4JoOW4IZpyXjgn+VoCJUoae5J Djl45Bcc2Phrn4HW4Gg/+pIwTFqqZZ2jFrznNdgeIxTGjBrVsyJUeO3BHI0mVLaq jtjhTshYCI7mXOis9W3ic0RwE8rgdDXOYKHhLVw9c4094P/43utSVXE7UzbEhhLE Ngb4H5UGrQmPTNbq40tMUMUCej3zIKuVOvamzeE0IwLhkjNrvKhCG1EUhX4uoJKu DQ++3KVIVeYSv3+78Jfw9F3usAXxX1ICU74/La5DUNjU7DVodLDvCAy5y1jxP3Ic If6m7aBYVjFSQAcD8PZPeIEl9W4ZnbwyBfSDd11P2a8JcZ7N99GiiH3yS1QgJnAO g9XAgjT4Gcn7k4lHPHLULgijfiDSvt94Ga4/hse0F0akeZslVN/bygyib7x7Lzmq JkepRianrvKHbatuxvcajt/d+dxCnr32Q1qCEc5fcgDsjvviRL2tKR0qhuYjn1zR Vk/fRtYOmlaGBVzUXcjLRAg3gC9+Gy8KvXIDrnHxD+9Ob+DUP9fgbKqMeOzKcCK8 NSfSQ+tQjBYD5Ku4zAPUQJoRGgx43vXzcl2Z2i3E2otpoH82Kx8S9WlVEUlTtBjQ QIGM5aR0QUNt8z34t2KWRA8SpP54VzBmEPdwLnzna+PkrGKsKiHVn4K+HfjDp1uW xyO8VjrolAOYosTPXMpNp2u/FoFxaAPTa/TvmKc0kQ3ED9/sGLS2twDnEccvHP+9 zzrnzzN3T2CWuXveDpuyuAty3EoAid1nuC86WakSaAZoa8H2QoRgsrkkBCq+K/yl 4FO9wuP+ksZoVq3mEDQ9qv6H4JJEWurfkws3OqrA5gENcLmSUkZie4oqAxeOD4Hh Zx4ckG5egQYr0PnOd2r7ZbIizv3MKT4RBrfOzrE6cvm9bJEzNWXdDyIxZ/kuoLA6 zX7gGLdGhg7dqzKqnGtopLAsyM1b/utRtWxOTGO9K9lRxyX82oCVT9Yw0DwwA+cH 
Gutg1w7JHrIAYEtY0ezHgxhqMGuuTyJMX9Vr0D+9DdMeBK7hVOeSnxkaQ0f9HvF6 0XI/2OTIoBSCBpUXjpgsYt7m7n2rFJGJmtqgLAosCAkacHnHLwX0EnzBw3sdDU6Q jFXUWIDd5xUsNkFDCbspLMFs22hjNI6f/GREwd23Q4ujF8pUIcxcfbs2myjbK45s tsn/jrkxmKRgwCIeN/H7CM+4GXSkEGLWbiGCxWzWt9wW1F4M7NW9nho3D1Pi2LBL 1ByTmjfo/9u9haWrp53enDLJJbcaslfe+zvo3J70Nnzu3m3oJ3dmUxgJIstG10g3 lhpUm1ynvx04IFkYJ3kr/QHG/xGS+yh/pMZlwcUSpjEgYFmjFHU4A1Ng4LGI4lnw 5wisay4J884xmDgGfK0sdVQyW5rExIg63yYXp2GskRdDdwvWlFUzPzGgCNXQU96A ljZfjs2u4IiVCC3uVsNbGqCeSdAl9HC5xKuPNbw5yTxPkeRL1ouSdkBy7rvdFaFf dMPw6sBRNW8ZFInlgOncR3+xT/rZxru87LCq+3hRN3kw3hvFldrW2QzZSksO759b pJEP+4fxuG96Wq25fRmzHzE0bdJ+2qF3fp/hy4oRi+eVPa0vHdtkymE4OUFWftb6 +P++JVOzZ4ZxYA8zyUoJb0YCaxL+Jp/QqiUiH8WZVmYZmswqR48sUUKr7TIvpNbY 6jEH6F7KiZCoWfKH12tUC69iRYx3UT/4Bmsgi3S4yUxfieYRMIwihtpP4i0O+OjB /DPbb13qj8ZSfXJ+jmF2SRFfFG+2T7NJqm09JvT9UcslVd+vpUySNe9UAlpcvNGZ 2+j180ZU7YAgpwdVwdvqiJxkeVtAsIeqAvIXMFm1PDe7FJB0BiSVZdihB6cjnKBI dv7Lc1tI2sQe7QSfk+gtionLrEnto+aXF5uVM5LMKi3gLElz7oXEIhn54OeEciB1 cEmyX3Kb4HMRDMHyJxqJXwxm88RgC6RekoPvstu+AfX/NgSpRj5beaj9XkweJT3H rKWhkjq4Ghsn1LoodxluMMHd61m47JyoqIP9PBKoW+Na0VUKIVHw9e9YeW0nY1Zi 5qFA/pHPAt9AbEilRay6NEm8P7TTlNo216amc8byPXanoNrqBYZQHhZ93A4yl6jy RdpYskMivT+Sh1nhZAioKqqTZ3HiFR8hFGspAt5gJc4WLYevmxSicGa6AMyhrkvG rvOSdjY6JY/NkxtcgeycBX5MLF7uDbhUeqittvmlcrVN6+V+2HIbCCrvtow9pcX9 EkaaNttj5M0RzjQxogCG+S5TkhCy04YvKIkaGJFi8xO3icdlxgOrKD8lhtbf4UpR cDuytl70JD95mSUWL53UYjeRf9OsLRJMHQOpS02japkMwCb/ngMCQuUXA8hGkBZL Xw7RwwPuM1Lx8edMXn5C0E8UK5e0QmI/dVIl2aglXk2oBMBJbnyrbfUPm462SG6u ke4gQKFmVy2rKICqSkh2DMr0NzeYEUjZ6KbmQcV7sKiFxQ0/ROk8eqkYYxGWUWJv ylPF1OTLH0AIbGlFPLQO4lMPh05yznZTac4tmowADSHY9RCxad1BjBeine2pj48D u36OnnuQIsedxt5YC+h1bs+mIvwMVsnMLidse38M/RayCDitEBvL0KeG3vWYzaAL h0FCZGOW0ilVk8tTF5+XWtsQEp1PpclvkcBMkU3DtBUnlmPSKNfJT0iRr2T0sVW1 h+249Wj0Bw=="}Azure-WALinuxAgent-a976115/tests/data/metadata/ext_handler_pkgs.json000066400000000000000000000002701510742556200254400ustar00rootroot00000000000000{ "versions": [{ "version":"1.3.0.0", "uris":[{ "uri":"http://localhost/foo1" },{ 
"uri":"http://localhost/foo2" }] }] } Azure-WALinuxAgent-a976115/tests/data/metadata/ext_handlers.json000066400000000000000000000006611510742556200246030ustar00rootroot00000000000000[{ "name":"foo", "properties":{ "version":"1.3.0.0", "upgradePolicy": "manual", "state": "enabled", "extensions":[{ "name":"baz", "sequenceNumber":0, "publicSettings":{ "commandToExecute": "echo 123", "uris":[] } }] }, "versionUris":[{ "uri":"http://ext_handler_pkgs/versionUri" }] }] Azure-WALinuxAgent-a976115/tests/data/metadata/ext_handlers_no_ext.json000066400000000000000000000000031510742556200261450ustar00rootroot00000000000000[] Azure-WALinuxAgent-a976115/tests/data/metadata/identity.json000066400000000000000000000000641510742556200237510ustar00rootroot00000000000000{ "vmName":"foo", "subscriptionId":"bar" } Azure-WALinuxAgent-a976115/tests/data/metadata/trans_cert000066400000000000000000000021271510742556200233160ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDBzCCAe+gAwIBAgIJANujJuVt5eC8MA0GCSqGSIb3DQEBCwUAMBkxFzAVBgNV BAMMDkxpbnV4VHJhbnNwb3J0MCAXDTE0MTAyNDA3MjgwN1oYDzIxMDQwNzEyMDcy ODA3WjAZMRcwFQYDVQQDDA5MaW51eFRyYW5zcG9ydDCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBANPcJAkd6V5NeogSKjIeTXOWC5xzKTyuJPt4YZMVSosU 0lI6a0wHp+g2fP22zrVswW+QJz6AVWojIEqLQup3WyCXZTv8RUblHnIjkvX/+J/G aLmz0G5JzZIpELL2C8IfQLH2IiPlK9LOQH00W74WFcK3QqcJ6Kw8GcVaeSXT1r7X QcGMqEjcWJkpKLoMJv3LMufE+JMdbXDUGY+Ps7Zicu8KXvBPaKVsc6H2jrqBS8et jXbzLyrezTUDz45rmyRJzCO5Sk2pohuYg73wUykAUPVxd7L8WnSyqz1v4zrObqnw BAyor67JR/hjTBfjFOvd8qFGonfiv2Vnz9XsYFTZsXECAwEAAaNQME4wHQYDVR0O BBYEFL8i/sehpGV6IEDX7F0WQHQ/ZXOyMB8GA1UdIwQYMBaAFL8i/sehpGV6IEDX 7F0WQHQ/ZXOyMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMPLrimT Gptu5pLRHPT8OFRN+skNSkepYaUaJuq6cSKxLumSYkD8++rohu+1+a7t1YNjjNSJ 8ohRAynRJ7aRqwBmyX2OPLRpOfyRZwR0rcFfAMORm/jOE6WBdqgYD2L2b+tZplGt /QqgQzebaekXh/032FK4c74Zg5r3R3tfNSUMG6nLauWzYHbQ5SCdkuQwV0ehGqh5 VF1AOdmz4CC2237BNznDFQhkeU0LrqqAoE/hv5ih7klJKZdS88rOYEnVJsFFJb0g 
qaycXjOm5Khgl4hKrd+DBD/qj4IVVzsmdpFli72k6WLBHGOXusUGo/3isci2iAIt DsfY6XGSEIhZnA4= -----END CERTIFICATE----- Azure-WALinuxAgent-a976115/tests/data/metadata/trans_prv000066400000000000000000000032501510742556200231660ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDT3CQJHeleTXqI EioyHk1zlguccyk8riT7eGGTFUqLFNJSOmtMB6foNnz9ts61bMFvkCc+gFVqIyBK i0Lqd1sgl2U7/EVG5R5yI5L1//ifxmi5s9BuSc2SKRCy9gvCH0Cx9iIj5SvSzkB9 NFu+FhXCt0KnCeisPBnFWnkl09a+10HBjKhI3FiZKSi6DCb9yzLnxPiTHW1w1BmP j7O2YnLvCl7wT2ilbHOh9o66gUvHrY128y8q3s01A8+Oa5skScwjuUpNqaIbmIO9 8FMpAFD1cXey/Fp0sqs9b+M6zm6p8AQMqK+uyUf4Y0wX4xTr3fKhRqJ34r9lZ8/V 7GBU2bFxAgMBAAECggEBAM4hsfog3VAAyIieS+npq+gbhH6bWfMNaTQ3g5CNNbMu 9hhFeOJHzKnWYjSlamgBQhAfTN+2E+Up+iAtcVUZ/lMumrQLlwgMo1vgmvu5Kxmh /YE5oEG+k0JzrCjD1trwd4zvc3ZDYyk/vmVTzTOc311N248UyArUiyqHBbq1a4rP tJhCLn2c4S7flXGF0MDVGZyV9V7J8N8leq/dRGMB027Li21T+B4mPHXa6b8tpRPL 4vc8sHoUJDa2/+mFDJ2XbZfmlgd3MmIPlRn1VWoW7mxgT/AObsPl7LuQx7+t80Wx hIMjuKUHRACQSLwHxJ3SQRFWp4xbztnXSRXYuHTscLUCgYEA//Uu0qIm/FgC45yG nXtoax4+7UXhxrsWDEkbtL6RQ0TSTiwaaI6RSQcjrKDVSo/xo4ZySTYcRgp5GKlI CrWyNM+UnIzTNbZOtvSIAfjxYxMsq1vwpTlOB5/g+cMukeGg39yUlrjVNoFpv4i6 9t4yYuEaF4Vww0FDd2nNKhhW648CgYEA0+UYH6TKu03zDXqFpwf4DP2VoSo8OgfQ eN93lpFNyjrfzvxDZkGF+7M/ebyYuI6hFplVMu6BpgpFP7UVJpW0Hn/sXkTq7F1Q rTJTtkTp2+uxQVP/PzSOqK0Twi5ifkfoEOkPkNNtTiXzwCW6Qmmcvln2u893pyR5 gqo5BHR7Ev8CgYAb7bXpN9ZHLJdMHLU3k9Kl9YvqOfjTxXA3cPa79xtEmsrTys4q 4HuL22KSII6Fb0VvkWkBAg19uwDRpw78VC0YxBm0J02Yi8b1AaOhi3dTVzFFlWeh r6oK/PAAcMKxGkyCgMAZ3hstsltGkfXMoBwhW+yL6nyOYZ2p9vpzAGrjkwKBgQDF 0huzbyXVt/AxpTEhv07U0enfjI6tnp4COp5q8zyskEph8yD5VjK/yZh5DpmFs6Kw dnYUFpbzbKM51tToMNr3nnYNjEnGYVfwWgvNHok1x9S0KLcjSu3ki7DmmGdbfcYq A2uEyd5CFyx5Nr+tQOwUyeiPbiFG6caHNmQExLoiAQKBgFPy9H8///xsadYmZ18k r77R2CvU7ArxlLfp9dr19aGYKvHvnpsY6EuChkWfy8Xjqn3ogzgrHz/rn3mlGUpK vbtwtsknAHtTbotXJwfaBZv2RGgGRr3DzNo6ll2Aez0lNblZFXq132h7+y5iLvar 4euORaD/fuM4UPlR5mN+bypU -----END PRIVATE KEY----- 
Azure-WALinuxAgent-a976115/tests/data/metadata/vmagent_manifest1.json000066400000000000000000000006541510742556200255350ustar00rootroot00000000000000{ "versions": [ { "version": "2.2.8", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.8.zip" } ] }, { "version": "2.2.9", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.9.zip" } ] } ] }Azure-WALinuxAgent-a976115/tests/data/metadata/vmagent_manifest2.json000066400000000000000000000006541510742556200255360ustar00rootroot00000000000000{ "versions": [ { "version": "2.2.8", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.8.zip" } ] }, { "version": "2.2.9", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.9.zip" } ] } ] }Azure-WALinuxAgent-a976115/tests/data/metadata/vmagent_manifests.json000066400000000000000000000002601510742556200256300ustar00rootroot00000000000000{ "versionsManifestUris" : [ { "uri" : "https://notused.com/vmagent_manifest1.json" }, { "uri" : "https://notused.com/vmagent_manifest2.json" } ] } Azure-WALinuxAgent-a976115/tests/data/metadata/vmagent_manifests_invalid1.json000066400000000000000000000003121510742556200274150ustar00rootroot00000000000000{ "notTheRightKey": [ { "uri": "https://notused.com/vmagent_manifest1.json" }, { "uri": "https://notused.com/vmagent_manifest2.json" } ] }Azure-WALinuxAgent-a976115/tests/data/metadata/vmagent_manifests_invalid2.json000066400000000000000000000003121510742556200274160ustar00rootroot00000000000000{ "notTheRightKey": [ { "foo": "https://notused.com/vmagent_manifest1.json" }, { "bar": "https://notused.com/vmagent_manifest2.json" } ] }Azure-WALinuxAgent-a976115/tests/data/ovf-env-2.xml000066400000000000000000000037141510742556200217130ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa 
CustomData 1.0 kms.core.windows.net true true true false Azure-WALinuxAgent-a976115/tests/data/ovf-env-3.xml000066400000000000000000000037101510742556200217100ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net true true false Azure-WALinuxAgent-a976115/tests/data/ovf-env-4.xml000066400000000000000000000037201510742556200217120ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net bad data true true false Azure-WALinuxAgent-a976115/tests/data/ovf-env.xml000066400000000000000000000037151510742556200215550ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net false true true false Azure-WALinuxAgent-a976115/tests/data/safe_deploy.json000066400000000000000000000010161510742556200226300ustar00rootroot00000000000000{ "blacklisted" : [ "^1.2.3$", "^1.3(?:\\.\\d+)*$" ], "families" : { "ubuntu-x64": { "versions": [ "^Ubuntu,(1[4-9]|2[0-9])\\.\\d+,.*$" ], "require_64bit": true, "partition": 85 }, "fedora-x64": { "versions": [ "^Oracle[^,]*,([7-9]|[1-9][0-9])\\.\\d+,.*$", "^Red\\sHat[^,]*,([7-9]|[1-9][0-9])\\.\\d+,.*$" ], "partition": 20 } } 
}Azure-WALinuxAgent-a976115/tests/data/signing/000077500000000000000000000000001510742556200211035ustar00rootroot00000000000000Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip000077500000000000000000001210201510742556200330660ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/signingPKZQ resources/Ubuntu_defaultUT ggux Vn6}bFH|fk'Y#vb7FkH^a/I׻)rH΍A,łipRU&R+dQr&kt^[__J{gPNF3JmĒP)F;ֱ҆nfņKےd:#G\&9Id"a:EFtsߝv3D]|=ֆӚHt9%-$sc|۟XZp?lu`՘;ʥ%Zex_TFmx(Ɍ\q@BF\漂, t`A;pYv;;|R=ыTh^܉Dz//F悆wO޸WA$R?vJg]8 \+wU k7ڤ4gSUqq I4/武Eo>gMUbFRQ ^AUki }FzmK.ݭZӨxO(V=Z\IʚV!.YݫzFuqpKx%$@_㥴@QdKpK Zb26֬Jixq%t/Z ǃ/PKZO'*(resources/fedora_defaultUT ggux W]S#}ƿB "d0kpa6'Ji ci"il&9-ZvW>}j5'?<?OzW¹e=Wh_~jNVU\k!~h4Nt_R욬pبD,h1W9] 1!+VƒPU0ܹ *bLo,m&ysUeXD@Je}V-UQԮLnÑpj^p )Z˼$Y95\J/rkbiES<9]J !qE8M̒srJH]5N yh]?_8D=$BƛtO•EPjCieĹK1@K_Z:F@Jz-r*zbbGmf|k*϶qoOH*}^BB՛;u2|*p4Tsji7@=ۀdiA,r.ڨIЃgh:7fPz9''W*&KrY\TW"y6FQ4f15bpi^ T2)Bu])?6ƇMQ_oUh(6 _jxvr`au)F8ȝNǑF{R' 3^z2v/2e,r;{i V^P12?6,_֙Do;f7DGy }慷% ٣Î`'.FB=1m Dz%^l{)2pIH/t#eRk!EKE2ՋB:ld|h#©U(V-AſsMNyN+Pt[:`wdgd>4H lp,x5U .p|r=:DC8e$ (ʃϸSZXw)~u6%IS*%/۸׺(b 'wW g 7J<`y*5| SnSzq.yDOΏƞu \B|&NjcE0d#8ΦpvyĬ*F /L/K bOFjP:M9!]e8z̐@PyցR9s\vJWE'TJ0!F8kDݾ^ap#QO>a2 LJ4}M}%8%: at hLaЃhp=&C'x'\ r#m(44- @Y<=H"LE.qEsm*g{C(vGwDE+A@b gת5^Mݓ]/lufF7W~ƺOཷ$~Ƹ42_5äv2oD}",T(// ~n][-ix6#K{St*?aVVž'W تqy]J(!OW[B3裸8\EQ?;5qyCaQ-O+G˕vpjE}]Z._Nj8 g$8`.-4`ZjqKDLoiB{Vx}n= nM(r{^,Xq|ҦdU{/lLɬ?LT$-'Al6ś T%svvR/*k"vy& xgyU.ptf vzYꃨz4M3׈?]ZȢ,x!йv:Z3a6+oݧ] LYnD͆6ncSҰ?Qf>r^} ִy< XgZnբjUY!!r8n7רW5J/۲Jq~`2l(G(`L~`z'Dvr{;F d8x# oq4I F#M\H_vC]2 W :Ț1dNqaqI9{J͒MTv?vZPKZQ resources/debian_defaultUT ggux Vn6}bFH|fk'Y#vb7FkH^a/I׻)rH΍A,łipRU&R+dQr&kt^[__J{gPNF3JmĒP)F;ֱ҆nfņKےd:#G\&9Id"a:EFtsߝv3D]|=ֆӚHt9%-$sc|۟XZp?lu`՘;ʥ%Zex_TFmx(Ɍ\q@BF\漂, t`A;pYv;;|R=ыTh^܉Dz//F悆wO޸WA$R?vJg]8 \+wU k7ڤ4gSUqq I4/武Eo>gMUbFRQ ^AUki }FzmK.ݭZӨxO(V=Z\IʚV!.YݫzFuqpKx%$@_㥴@QdKpK Zb26֬Jixq%t/Z ǃ/PKZO'*(resources/centos_defaultUT ggux W]S#}ƿB "d0kpa6'Ji ci"il&9-ZvW>}j5'?<?OzW¹e=Wh_~jNVU\k!~h4Nt_R욬pبD,h1W9] 1!+VƒPU0ܹ 
*bLo,m&ysUeXD@Je}V-UQԮLnÑpj^p )Z˼$Y95\J/rkbiES<9]J !qE8M̒srJH]5N yh]?_8D=$BƛtO•EPjCieĹK1@K_Z:F@Jz-r*zbbGmf|k*϶qoOH*}^BB՛;u2|*p4Tsji7@=ۀdiA,r.ڨIЃgh:7fPz9''W*&KrY\TW"y6FQ4f15bpi^ T2)Bu])?6ƇMQ_oUh(6 _jxvr`au)F8ȝNǑF{R' 3^z2v/2e,r;{i V^P12?6,_֙Do;f7DGy }慷% ٣Î`'.FB=1m Dz%^l{)2pIH/t#eRk!EKE2ՋB:ld|h#©U(V-AſsMNyN+Pt[:`wdgd>4H lp,x5U .p|r=:DC8e$ (ʃϸSZXw)~u6%IS*%/۸׺(b 'wW g 7J<`y*5| SnSzq.yDOΏƞu \B|&NjcE0d#8ΦpvyĬ*F /L/K bOFjP:M9!]e8z̐@PyցR9s\vJWE'TJ0!F8kDݾ^ap#QO>a2 LJ4}M}%8%: at hLaЃhp=&C'x'\ r#m(44- @Y<=H"LE.qEsm*g{C(vGwDE+A@b gת5^Mݓ]/lufF7W~ƺOཷ$~Ƹ42_5äv2oD}",T(// ~n][-ix6#K{St*?a}j5'?<?OzW¹e=Wh_~jNVU\k!~h4Nt_R욬pبD,h1W9] 1!+VƒPU0ܹ *bLo,m&ysUeXD@Je}V-UQԮLnÑpj^p )Z˼$Y95\J/rkbiES<9]J !qE8M̒srJH]5N yh]?_8D=$BƛtO•EPjCieĹK1@K_Z:F@Jz-r*zbbGmf|k*϶qoOH*}^BB՛;u2|*p4Tsji7@=ۀdiA,r.ڨIЃgh:7fPz9''W*&KrY\TW"y6FQ4f15bpi^ T2)Bu])?6ƇMQ_oUh(6 _jxvr`au)F8ȝNǑF{R' 3^z2v/2e,r;{i V^P12?6,_֙Do;f7DGy }慷% ٣Î`'.FB=1m Dz%^l{)2pIH/t#eRk!EKE2ՋB:ld|h#©U(V-AſsMNyN+Pt[:`wdgd>4H lp,x5U .p|r=:DC8e$ (ʃϸSZXw)~u6%IS*%/۸׺(b 'wW g 7J<`y*5| SnSzq.yDOΏƞu \B|&NjcE0d#8ΦpvyĬ*F /L/K bOFjP:M9!]e8z̐@PyցR9s\vJWE'TJ0!F8kDݾ^ap#QO>a2 LJ4}M}%8%: at hLaЃhp=&C'x'\ r#m(44- @Y<=H"LE.qEsm*g{C(vGwDE+A@b gת5^Mݓ]/lufF7W~ƺOཷ$~Ƹ42_5äv2oD}",T(// ~n][-ix6#K{St*?a7+s[ch32)YƣKWM^\=E흹c ͈u[a̕Bc qڤ%9 DC>` O Rk$ )_p hcfiT;\ p8$Bi%\&+=*P!l֝ N޻g1sxVPOid-KW(gCOsF8= #SCxyoXt`U9):!|jFςUW}IJ-!Y!ꍴF+Du 70<-feveFlCɸa a]5i8K\-O!{ "ILIUP_K[&;/ IY~텨aPMä{O<|m\lt@3Kؔ6Tp)WPFwsE1g ze'C՜V0=oMrLVKyw[Tsy?jO{fhsU>eeo8~/9?t ,k$j]Ū2e 5ťTA8Q/@hPKZOb manifest.xmlUT ggux Sn0+OBfj² +յMT" >,_RJdMڝ}0yA.|zϦ]a$I; "*a7kG ;@CMp;3Jvn他G^^ E|XOI560<2d , %|9lB,![c 9F=3*(Uϴ# l6_%:SO$20͕ st\;3zbC-*RWSt5^=QG0bO"]its&gGN?4'/iӨc,ЀD ^  GQWۘ. (o>H\<xoY8P bP,d8KNe8E&WmLJ߾>2yPKZtRHandlerManifest.jsonUT ggux 1k0w 9V-j2t))ȒНCBȫ;rݧwT D4'kSb%рќ$ePv))n@|ak:-+&.l@JWyVB$]J0л1~J_y>B=m5A@j0ygE ̐" Km.eaJ)22/S X(J2}Y*Z_"vTK.cEB2? 
PecЦsə1 4[(m؅XX0]fm:u˜e%嬆͔\d?0{{;d:NE|l&m"z(fR#GQ 2>ёkXB8c.Ul7U0|znw7 DKbU-Z3bF~=4"ZCZCPxeABfA,y"Lp$a:dTjb@Px*,&1KE^`!W!yFp R@lQ.^X eI!o2m gA]o621!mDm1l+Z@B3aMm)3PR&b.b^h:'рKC! '尅{:Q 1A1$Ry!tBix϶Un3wMA._X`C%8Q HûQhP$?lHB#;B֢F.,@Y1 3N&h׀^-boj{SO{78c1 37c3\$<.ޓQ%^> thzxG,ۖhbu wARJS9x Y\1W\r3jvoP˰(=wGw;jHJP~jP!aKAe)p=Z7Cwy8/rqn mE0!P>3F[,:/4*{6i1ڧtUv}VN9?rA߇擾*GFL(OH')7b\pTxb_\nA>r)hb1䱭.RAH%83ESMMX)l40> wcTiLIɒ'82;9͖ kEC~y_n>vPbf7K}m8O1{S54oᇓ789/k>_28<~Tih^Kz20HϋY\#MH'Wk;RAU~vT OŁd?j^>MD]aqڛjȫ_ndly5~U?k]T}Cy~kx'|{ܯl3nAWW#'u=Q-M)5yuc#UkoUd}I9RZ5ݭӺRC[+-jbsbLdh[QWM-;V_v1{HUκZ_wқa3u.-;EK]`Z*KS َgl֬j#s(y.}h65E.XSXSp\޺~wΰf8|r)@jV][[`57F}őTX?9k#ښ+T|kw.;4,r{ `ϵ̎ڥ*6+o}QjHkGM!0ڈbp8g-8A-0\&*M볜F +BuB5ƩvB(8o&3""Hd>mΔ/ ݤy]7Dj|Z$~va#gx~9`'8@X!)^r#i>N@qoGkْ =!7Gz+G4T~Ζ͖;J6cfZq8hQ^H%SKN5ge@Ӷu f{73Gd?UKkCz\jt:^t|Q3Xå~2FnZd(S[S p;[5 . G܎_cjS_moѠq>ulڻkň[/.z!KMMIuvȴjiTe?wDzoH?o~a& {%tJ׽@q :vkכ4{[eδT{|foȽOE6c >ccV/Dz:G*ho%C ; λ1{ ZWp(ǐvOJlq|FXh" P Tc["6PbZqX{o`$+5"u pBP+T-l9Ie՜0!gy#bGrd`͒`eDYRW"Ǹ0 ~ һ"_َ!X`!N'> L]>% PZLn͂w"xdiƂû"E|!C=(QY|W5b)`f W0dΦ{{2=Əg]|f?\]_M⊝^;>8_OwCƁT0 Ze? #y4r^C` Y¼2>}Ya+-3"qAܐ'z0`,7 "^_kݤ/Y A 7txX4;jG!ߢ$ F,(Zv ,p%pŀ1H A^DAC{^i':`C@ 9r9ƒ_ 1e(bA.'|蕡Qlj3dVp/6 B+YP vCӄWdO> $LzThNϸ a4*>C|yt2@g4-" 8D-)_j>i8ZC^@ N.҇B8eZF,/`z _Es2tݍ }?Nݜ/fCh@aO=U$2R;|h7\#Z 0 9u¯P ZK cEzUEhuϦbxam5܌8D~T0̹j8p^@ ,̤ٺN>99Rhf`Xc)G $cacG>  5ݪQk7"t7|vcN_ ;v+"F"鑵_HF{qZґ=)7ɲ4J 7)!i]; 7xa@1wd7U:8%x}` 8ah38?0'&=|KQzŦG<+,R'qFnƗ),-(Hc1*3Ԙ,Ꮰz^.IB?h3 !zkZL9JS\xEޭ/B7[g CsAUF6;7jp%nHʷapk-sģѴBU=7iF7ivyQ̈bh(EgUq;xdp/V\XOs <Yw ;GIv_.(- 28%U8t AXZN`ha{>0[}- x:Rב8xCt>*98C?z[ZMoXHcK~pvsi{A]w90u?r* Gm\>t~X;fșq-  AA $"VqN_|@3BL[)..m2-vNJ -gxsI)mMr5+O2#3br$?)n"7e%:ՁkNT)_Y*֩e{\d~)N3oKT-ը$|:Vb.I>ȭ YC^| H[!ȃM|jAE7a(V,* FQ5TqR3N#˖>bn/O(&_+JlNaB@hs] K|7L0y0eԧ/\ MC"pgq7(jjYr; &YN")aowɓ"#!{6ppˠ*?A;-UcL6v[*K FH}͏OB Te%PRnNŠ%=Gb9(mU)h(*努pSZRҫ icZ,D`tekwʡeBmKרL-JGҭ.A諔jdV&rd Qjޠއق;!~L5nv3؟McC|l(UVdtvL/ 0(y;B213 ! v=ZVێ=26;ѷ_8j{B AVQ` tC=I(`#jx'. 
@VI-]c0mq?3WjoI VP_LOI$Ws4c(`@kL,ֿ{MڣMA'fDQY8jh>Ԝ}cy%{eaV ѳ'd}ArY֢rnT˄ B9u~6F/Q8{n6z,6Qץv h< n:Yya ·p]&}( *EE?ʜV~P/DEooe_}.}ݲ!R+whGYxԶcWgC/R~4 ]wUl^B ,'Ƕ'x sW;A Wl24.]qU&*!@%w-ڋ0n&-CEUN2\ǎ(Q7#ԝ'G#$&bsEZFgI\9v0oGadfγ 5*. \Y]tdNX|4fs`AitHqh\z1NU2HY;~2.=Up;@!;nVoPYC6l  0tZ$S. (/JRPk'vF ~MZKf{.Wwmz^i%,Qͭ/-C}f~`#G43ͼn_mڒ6VKTgn0$V52mC:j./Nxn{4ipZf|.@b m䙠Emgo<2/\i#Pl^ygGgd`&4lnk{Ȝ05\&EMD mpjupKt [Gꊢ)r0c8ܧjWV%usRc[d l>sG7P `Cf^3f?Fw) Obmk/-w`#wX`C5$XXSچi9%erpy9ω-kXo_gd13NwJ14\?hY[R?(( t"27gC5n(+@=^-DE4DYRǔpcuLB }8j(gsf EyY[&DVƦ+ /j;=6X.˝+z/EX1yE5XE&Ij mZ'J\9$G? @_7yd!ay˺٣tP4[67%>w#hE a9>czi*hnmR]Ǔ,]kۇP9Z-HyM'bǕ>^}PKZԧJ% CHANGELOG.mdUT ggux mTKo0 Wa-0{vҤV`: 2 lӱVE2D)J/À cߋt e.fŬȊ[\N2xWH;maݺWAW&UR{G@<|N9wʮ)$u%U.tZ%Ǜ9GΏW2OKɿ\ 5[A A»mеmOHBkǛQ&"C;M{>$"p^'ڞ 3,vtXr9VWJ^"[ \51Ź.{1w9FjWgԒyčg VeԹheRy)w~ܧ~w7,5<ڧ`B@=ֺ\J4l;r.kmoU{ VL6:qG̿lo۠5">5y/Kc:+?QK4F0M~`}nUфPKZ K extension_shim.shUT ggux Vmo6_qc.fn/ .h,KaTNQDNHY!Ͻ^IB 5,b3v B.)fq]Hj˔}b~9}ܮ_wWg< Go>G#!nkֲ#HI / [-YҢ46mU 2u!zCBJYB=Lk! J-hJf& 1gGjC'4:[ w_ }\~yO`B@_gKuUK)^j/Esu.Vc%O֫8IPq Bl}Q+RƤ6 ^%s*R(2yU l! /t mi~:j[B%qy=Ϣ)s >+֬~!xN!1;q>Iq=5)YM]Ӭy=~7㍔;0[0IXT2Ut wWIn#Sǜ`3nUyQ0Sjd;[*bZQ):Ujh4ݕ!@b^C .K{0T?١CfhE@?#,!2K " =zAC,pvp?D̴Ǖ]\Cqy97ŽB\f:NXJ`z3 Y{`0IˆN$ L }E-gyeN̓w9(_F[kkd?&?Q]ͨض>ZW'rz\9T*WMtL|&Mi&}S#gڤ_!St `@痣ph:R%b~{āyۯ_ j5C@D7f]K u{Qɰ\`UVH w|H{N5) ~M+fjm}A청ry3PKZL'vMUtils/__init__.pyUT ggux en0D}iWNړ긨T,A4IvIUw@B;r-"ܝno>~Oo$x1˖*gK.R5$H'B:YIT9nh:\!ZpD=-vz6FNIOW LFFAwkLR^㘛6ҭa\ߗmUo?(˃)FX4ǚRfNHgOpb׭0u6!ui8&ޔN3h]aQ(uY<|=XEu(5v{lv]y(wV J BT#5Sg5ֿ@[uC4ətQui~ PyePKZٺuUtils/constants.pyUT ggux e 0!w]PiA(v)H66?9};3Vw +;5#d2躌?<BDWwEod 2{Vʹ-Uvvby'B! 
2yW&K4zې&]\07lCjؙtk7v3q8x=iIPKZթtZUtils/distroutils.pyUT ggux o6 :@Ցa eIbm 4Ȓ@IN}þGR%QKsM."hÜȏW7mfFW͖1+2R|5ajaS}|H8+΂'RQQ_FOɜ)atI/'rW}Ȁ?4WǖAx;ݤY4c$Yd* Th KqI  %+FظFā,;74 w'B-iAei7CQĈy%c暥8N^-N>3^Fnmn6L 00Nc} O:bdҘTG!]]vlI!a|qÈ/(ˇ"_z ;JAN0Fg`4Jeࢍ;DQon6ė Ii_96H]d="ISm 0dx@1TcG*0c0oĥ|冭C#_>6pNRӛ ^> 8pW9,߼>}w)碲T 78c!œ̛Iu'}sbIHuhM/l$++Ygnb65~!tI,Vo KLǘSo";v ?MJls&=Jt/2pE`4 Cp Ȱpࡄڔ61+XoF= pb;jRS>/QkC{`@R Nxz?9r/T##y\Cc^g(Em:6_÷AR@px\LX(t$ ㍓!kڸL%dB[/9P8Hc=1!OIFl?͊F`u>].!GQ32J4zk qג"! ذZ8 8Ϝ,u&s {$=r ]Hb5QwS^&fzҩQjVc(=TX @+t "H@yi~+md=R~sy6~X# ɮ~N9f7 4pХ߇LE/P!KuEi[xuj iwj~Ԥ^%WYN!k`! ,z'>RS`PIL=em$ݾW8 itl)4j0-73K#g\,25pMk Z]qhPNs4xYmQ)dj|iӿY\;g SL {lSd2hU5N2lq3V۷4xzvXv|O?E ?C:!^IlywVj#Y+QD7* :zȫ0U8:T#d/-kޢJzL xfx>c"S$Dj%: u [kgv1蜡u)6 ڀPKLahT^о%2E 7De= An1&>,a˳ֺhGQ'X?_ dž('\TvmeH\lKR(6uL}Qg xMR]d>@~}ե€t?i %X܌c-~Fbk] ?JO/n{}v~QN ytׂHީ`*i;+1ftJ78wx>*S+aH+axyCual/["6"!*#Y`O1}쪜hOUzB_ 4Rjo@v^َL'05{iBiz/wd&?un|禣q3sI H)jߤvq䏫FBڹ}/7rt@ P/Io]xG陁\:KCI5鎕dIې.]MV %r}bb[GaAY?ɶ]x;|>Z ;voߑqf4jve`Auv5y n|~.&_eq$oϚ(\rbi)קwZ}i[l7SGقs5o5ft3x ܈H'D;~Aqc:nsN"8?7gfx_ ?@w?]':}z muYW[j`9iX !pK \oZ ,˦fl/Uʓ5C SYk3ؾaY"Zng"Sq<_ cBQ,+\ e %0۲Wʐ7 ;IDj td jjU걯XR42[]9^c -q#<#c\&r ?)i&bHa,U2J)|#gu EU7$q+FI4fX !gD*)0Ј oR1&zmNwߜߞI&׏Ӭ  - pq v.1k-;&Ddx!]92[45ҙpNeoLe"V2F o HZݔ0[@V־i4^ b}aQQ x 9x͎2t<(*drj ;SռbJ.LJqH |"d(Oq2mw} ͬvn1҆V-v6dV-bUٳv o5Hb.y"S1n_gׁg@q+ jP(H* Ͼ p "(uAɼύΒYmG/7-Um2gI^k]~g ]|EC//x KGQh]Gy* bc_D:9uX2ә#r'J*hP]@JOlHTό8%"Ijhlѝ,g̭ H۸SP`g~E;tKwZ5!Jn k~rer<-Qyzw2 排xedo]wP_˾SՈx1ƺay<'`4+x"ڌ@SR 8nV0{bc>Z+z,0ҝynkF.3D zY#8b;;=,#6>CE"`|mong~:%mC;TnrE;h]vs},UeݧdIF~]3mN꟬.}8@0LoZ+΁[+:\UCeCքHLݗVZכ/;a,Q7+Bfs`Og "tnEp7G1d* -kq]esWoo?k1KG!?c|>1|^nZ(NSa"Xvpu#;Eq5=& Vi_&sJߓ*K{kH ]$A@njpyhжT ʴV\~v#⨻ $ buSB-Sc]MW %è~- tц{'LVEg|jĦd"Tc/]|ډJj^/N܄Y@oP,#~ OzO |]~,q)S$CP\[`iN{VlOQ|XhIK 0"6 z~>| 75o]d(L/Ճg@QAI{BvVO~4GHkj-JyTyµQ!{({|KۣG'a@.hkip [)X b)C MrDH3U"(8\ *2{Wۦ2LՑ6lI$ i>unA{GGo0/?Mh_atZ:U)_HQIq7i ?z_?#0"QA* lEӛ?X\^wV7O2W#!ۊPxÀpϣ_Rw\k kFUtils/handlerutil2.pyUT ggux ;SҿW͖8bu]O}C񁫠B2@< "k ܭrIfzz{z{z: ȩl: m^e ͜, 
aM6dZ%b[5rÞZj/4UT;>=erGmɶT%ExP7`XZsbKbZ.;Phi:%USh&Q,ckP #Ga ]e( Gd&;=T$3b%WtЩ\tn!Nt6l`u$Q!P 6>BzjLPf ٦Eֆs7&,:9 M"{q}zu'zi i\u~ omR>NY&Dש(1P ݙREi e򘒱Bm!Sjy*`5Cs68iB Bw%*ͶLMz|+"AC"^er`B:Z=O&UIj)OmKJqğ|1m0nb\̑r@$s'9k4AQA'`{P#R#R:M禫Ǜ7_Daۛ0-*}~cn27S[C ASZKss]|;8?5͚PN# es?[;[VF}xۨ__+ ;egxl=ks|ojN+ߞ?ܫn6Ng|iNiw?l,V^szp}~y2Y<ͫ݊瀞/]ڿ'~#SM0+.a{l@hK =Y Y {H64#ߪ~>Ww3ͯИi`N4@Z`]>s/1-`'GpC'jY cM@ܺ9j:zn &>ISٝ2ߘ(xj#Q9/K 87]aa{*u}5t4)>4y+BSviHo3#"l*Ԍa( 0O+"tU*&z= ]Ld~0^}[l쫆zh}6u継\T]殊Ö1} _ b b)+lt)-GE[RjWtĜr ِ\ypԶ-;Q|{[81G0ZS-)1e,H hSwvk* IŚ73)aGI3 ';qpH6BtmjFt<"CdL !A+ф` 1&ps*iS1j#KM=itCxǟ  DYRje#jHܢ%#bz!%U{*$&DLR! aS8f\F(UKj# dc;8bpHq H#T?;q#~&(å;#E5L5*_)`#z B"*F$*U,L"Ep)P2®eye78mC7hvtwZD[|(&ŧB7)1HLw ~v.AS7S uMg8ZLR'XBNІM8m $v4Kt>7ϨDDTVý]/i1]:h“Vw@1ۣM&mcuZ`{9uٳMm*2\B2!Ae.?ԧ 6 ;RUN]z1!gI`H&,eά S;62Cd` !j58oR$(Ur&;)KJJeKhHƲLEf>CbRS-ltʎ%7*j滳'e]Nc[uqWy𿐽o+(m31"L:] ]b$O M0E"ޟ2sr..%@2|\y #Z/cE[4cȱgF)JTD%|rqbf_)-@Ϝb "Z,}Urx~wf m(54z%Uǎi=+S`"&L*Dur 5eqcI]!Bʔˊ(' jX34Sz:&cI Ϡt'6(%D:E $+P XeaI)P yE/8ZsWd5tMeg f6Ǒ_(kˌꝌ<o0I@1shR7mOV65, ?GmYwR>`sQdL\nd<+Z*eRG6Ө~:s e%wI *B?V)++$)NÄ^&#кx=Wr{>b.?KuweI4>#d.e6𬔪[٪L0q/ pso_2gۛ"2 N.B fkYtTa~N~Ej*y2`R"Ʒ{.H^pq].+sěJʖޱW.}|ih2<LT[s0A[*_2Nh)vd p\i^p(]u &~Yw6PP/e 콐[5 C΍)IJмɻ*xbU#.,6E.mF~FqL`m܈aZE@tcތg͔3?#¬uw2wog,Q>hDXoJ#uI#\6#qIK<'v)y2ΰdyO%v~1ŃĿjort ]2 ULeO2]c->6 ptʯz8p R_E~SniB득q L ⪻|rZk9V09-YGe ƸG1¨3U&uaqTPKZ4K/`Utils/logger.pyUT ggux ]o6ݿpF=%,]HaZ:l$R8#)+ɻ}$BH^fiJ!I%. 
6BIqJ~Fģ8Z_bDͿ b3؛Hh!fSՌUIf˄,?'Ƴ5_E aF1!KTd,- wNb%A'0%]K%"(HQߧw(J YÚ%#DF)ާ5FnVorciHe`e!y)!UnsB-Cۛ7_ᕛ$"EDф8\33b-d<;iz\MYgLJτH;Ƽl.߆sU+YU%=r ,ɺUA|6ql*]62`*_~B)Yh{dv]^GRH^T&= +($<=m(<[aGN0cMW|,i %+v`Vx 1u l=Rh9zEye\d¬;Hkz0@Sdյ `FdBIih5X`4LN n^k+6,wx >G,3O%]Trqno]%Ej&zsH"J>xRV5D{8l,z|֘+W߂[ ګ'hz~8N ُ {ZI↛^w[d"X\5w=ڳzlvls"02mP PhȆo\6PN ė /<Ln<(UiKC]`0L2n^nm+/+opS9B?*fGgREC%CJu/"FDU3EX7 Efd:^ݟFzɫ .qgc!Qe SEm-7z-8؊HL]BgG\%nƨk-8( ]\ɶd} klJ;@?Ej+.lwlT3ZM[EZ>eɼ~Ӝ=˥ïG_h6?59X {#)rM60K&L!@4 bC!cjSM] |1W<Հ)x@s($ӛD+ZTdN ԏ<$42F~}x}3m.}W+*jFEy E5ج}G Z%_H\X|uocjFD46*yVU9Y%3r7h<4dJF 3K3t"K+qP4/o2~[db;i\p:yy?WR,0GGB,h4N|J; iMc]6;]A7@ޔ`d4 ==V0G4U]eJ3Ml9 Ρ|(KgTU:ǝ].Q{R.B],m1e؛&w{_{Ut9t~n2z p I!LLaU5*T^r XKY#nB-*g.T#v!fUί0ZvWyZ[δCTq¾ȱ͆f 0f#,B+ҍj6 C@0V՚yv]놦C6-ז]?!OBlsi+cgHڋ?RS97 tgKHq̆{X{TUpʨgmv6d i{.hQ N xXL40p)Iݏ>g0rʄC7?K&>ѻs+e%?ڂV>`\187z/W:$U'%`0f (Hܓ˝q*?NsWD2/An  'Pg0d)/12Uum0&.`P$>a+c{)ChFz}'M%U~CazzME͹6Za+'C6e0[ RA@ šgx4yu޶Ea;VufH%/V)ρ7h. T}Qk4Vx]Y{REkw%ژD^G@RU/4-JACٚDP9Z6pspr'm{1F顉QQq[ݦ %MAwI!pNOyl( I3}rw2(w-kjvݴY/JjF@}cPq*xCm4ràHëmN-򘇂E0R$dRa*6q9h(xv!oVi m15ڨ? 
},m&x@[i(ty+MЌ]mWǟ)9u8ɺ]fҠ Ҿ04֕Ȏӗp<2gkdH,;ĩ+ 7K&#`F ?O(U7G_!ɮtXU,2oCDbJBu\PkQ]~T Ca~ WdE727' ^t s߰!hnvHi\\:C#*.[P} %ڦ%2g\IAh"YG-#$7KFyۑG-PKZQ resources/Ubuntu_defaultUTgux PKZO'*(resources/fedora_defaultUTgux PKZ.I resources/SuSE_defaultUTgux PKZQ resources/debian_defaultUTgux PKZO'*(resources/centos_defaultUTgux PKZO'*(resources/redhat_defaultUTgux PKZh &resources/defaultUTgux PKZOb +manifest.xmlUTgux PKZtR-HandlerManifest.jsonUTgux PKZC" J$ /README.mdUTgux PKZ̄Z b  ;vmaccess.pyUTgux PKZԧJ% VSCHANGELOG.mdUTgux PKZ K SVextension_shim.shUTgux PKZL'vMtZUtils/__init__.pyUTgux PKZٺu5\Utils/constants.pyUTgux PKZթtZ;]Utils/distroutils.pyUTgux PKZKQb 0lUtils/extensionutils.pyUTgux PKZQ>CyUtils/handlerutil2.pyUTgux PKZ4K/`VUtils/logger.pyUTgux PKZ_?HUtils/ovfutils.pyUTgux PKModified_Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip000077500000000000000000001210201510742556200346660ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/signingPKZQ resources/Ubuntu_defaultUT ggux Vn6}bFH|k'Y#vb7FkH^a/I׻)rH΍A,łipRU&R+dQr&kt^[__J{gPNF3JmĒP)F;ֱ҆nfņKےd:#G\&9Id"a:EFtsߝv3D]|=ֆӚHt9%-$sc|۟XZp?lu`՘;ʥ%Zex_TFmx(Ɍ\q@BF\漂, t`A;pYv;;|R=ыTh^܉Dz//F悆wO޸WA$R?vJg]8 \+wU k7ڤ4gSUqq I4/武Eo>gMUbFRQ ^AUki }FzmK.ݭZӨxO(V=Z\IʚV!.YݫzFuqpKx%$@_㥴@QdKpK Zb26֬Jixq%t/Z ǃ/PKZO'*(resources/fedora_defaultUT ggux W]S#}ƿB "d0kpa6'Ji ci"il&9-ZvW>}j5'?<?OzW¹e=Wh_~jNVU\k!~h4Nt_R욬pبD,h1W9] 1!+VƒPU0ܹ *bLo,m&ysUeXD@Je}V-UQԮLnÑpj^p )Z˼$Y95\J/rkbiES<9]J !qE8M̒srJH]5N yh]?_8D=$BƛtO•EPjCieĹK1@K_Z:F@Jz-r*zbbGmf|k*϶qoOH*}^BB՛;u2|*p4Tsji7@=ۀdiA,r.ڨIЃgh:7fPz9''W*&KrY\TW"y6FQ4f15bpi^ T2)Bu])?6ƇMQ_oUh(6 _jxvr`au)F8ȝNǑF{R' 3^z2v/2e,r;{i V^P12?6,_֙Do;f7DGy }慷% ٣Î`'.FB=1m Dz%^l{)2pIH/t#eRk!EKE2ՋB:ld|h#©U(V-AſsMNyN+Pt[:`wdgd>4H lp,x5U .p|r=:DC8e$ (ʃϸSZXw)~u6%IS*%/۸׺(b 'wW g 7J<`y*5| SnSzq.yDOΏƞu \B|&NjcE0d#8ΦpvyĬ*F /L/K bOFjP:M9!]e8z̐@PyցR9s\vJWE'TJ0!F8kDݾ^ap#QO>a2 LJ4}M}%8%: at hLaЃhp=&C'x'\ r#m(44- @Y<=H"LE.qEsm*g{C(vGwDE+A@b gת5^Mݓ]/lufF7W~ƺOཷ$~Ƹ42_5äv2oD}",T(// ~n][-ix6#K{St*?aVVž'W 
تqy]J(!OW[B3裸8\EQ?;5qyCaQ-O+G˕vpjE}]Z._Nj8 g$8`.-4`ZjqKDLoiB{Vx}n= nM(r{^,Xq|ҦdU{/lLɬ?LT$-'Al6ś T%svvR/*k"vy& xgyU.ptf vzYꃨz4M3׈?]ZȢ,x!йv:Z3a6+oݧ] LYnD͆6ncSҰ?Qf>r^} ִy< XgZnբjUY!!r8n7רW5J/۲Jq~`2l(G(`L~`z'Dvr{;F d8x# oq4I F#M\H_vC]2 W :Ț1dNqaqI9{J͒MTv?vZPKZQ resources/debian_defaultUT ggux Vn6}bFH|fk'Y#vb7FkH^a/I׻)rH΍A,łipRU&R+dQr&kt^[__J{gPNF3JmĒP)F;ֱ҆nfņKےd:#G\&9Id"a:EFtsߝv3D]|=ֆӚHt9%-$sc|۟XZp?lu`՘;ʥ%Zex_TFmx(Ɍ\q@BF\漂, t`A;pYv;;|R=ыTh^܉Dz//F悆wO޸WA$R?vJg]8 \+wU k7ڤ4gSUqq I4/武Eo>gMUbFRQ ^AUki }FzmK.ݭZӨxO(V=Z\IʚV!.YݫzFuqpKx%$@_㥴@QdKpK Zb26֬Jixq%t/Z ǃ/PKZO'*(resources/centos_defaultUT ggux W]S#}ƿB "d0kpa6'Ji ci"il&9-ZvW>}j5'?<?OzW¹e=Wh_~jNVU\k!~h4Nt_R욬pبD,h1W9] 1!+VƒPU0ܹ *bLo,m&ysUeXD@Je}V-UQԮLnÑpj^p )Z˼$Y95\J/rkbiES<9]J !qE8M̒srJH]5N yh]?_8D=$BƛtO•EPjCieĹK1@K_Z:F@Jz-r*zbbGmf|k*϶qoOH*}^BB՛;u2|*p4Tsji7@=ۀdiA,r.ڨIЃgh:7fPz9''W*&KrY\TW"y6FQ4f15bpi^ T2)Bu])?6ƇMQ_oUh(6 _jxvr`au)F8ȝNǑF{R' 3^z2v/2e,r;{i V^P12?6,_֙Do;f7DGy }慷% ٣Î`'.FB=1m Dz%^l{)2pIH/t#eRk!EKE2ՋB:ld|h#©U(V-AſsMNyN+Pt[:`wdgd>4H lp,x5U .p|r=:DC8e$ (ʃϸSZXw)~u6%IS*%/۸׺(b 'wW g 7J<`y*5| SnSzq.yDOΏƞu \B|&NjcE0d#8ΦpvyĬ*F /L/K bOFjP:M9!]e8z̐@PyցR9s\vJWE'TJ0!F8kDݾ^ap#QO>a2 LJ4}M}%8%: at hLaЃhp=&C'x'\ r#m(44- @Y<=H"LE.qEsm*g{C(vGwDE+A@b gת5^Mݓ]/lufF7W~ƺOཷ$~Ƹ42_5äv2oD}",T(// ~n][-ix6#K{St*?a}j5'?<?OzW¹e=Wh_~jNVU\k!~h4Nt_R욬pبD,h1W9] 1!+VƒPU0ܹ *bLo,m&ysUeXD@Je}V-UQԮLnÑpj^p )Z˼$Y95\J/rkbiES<9]J !qE8M̒srJH]5N yh]?_8D=$BƛtO•EPjCieĹK1@K_Z:F@Jz-r*zbbGmf|k*϶qoOH*}^BB՛;u2|*p4Tsji7@=ۀdiA,r.ڨIЃgh:7fPz9''W*&KrY\TW"y6FQ4f15bpi^ T2)Bu])?6ƇMQ_oUh(6 _jxvr`au)F8ȝNǑF{R' 3^z2v/2e,r;{i V^P12?6,_֙Do;f7DGy }慷% ٣Î`'.FB=1m Dz%^l{)2pIH/t#eRk!EKE2ՋB:ld|h#©U(V-AſsMNyN+Pt[:`wdgd>4H lp,x5U .p|r=:DC8e$ (ʃϸSZXw)~u6%IS*%/۸׺(b 'wW g 7J<`y*5| SnSzq.yDOΏƞu \B|&NjcE0d#8ΦpvyĬ*F /L/K bOFjP:M9!]e8z̐@PyցR9s\vJWE'TJ0!F8kDݾ^ap#QO>a2 LJ4}M}%8%: at hLaЃhp=&C'x'\ r#m(44- @Y<=H"LE.qEsm*g{C(vGwDE+A@b gת5^Mݓ]/lufF7W~ƺOཷ$~Ƹ42_5äv2oD}",T(// ~n][-ix6#K{St*?a7+s[ch32)YƣKWM^\=E흹c ͈u[a̕Bc qڤ%9 DC>` O Rk$ )_p hcfiT;\ p8$Bi%\&+=*P!l֝ N޻g1sxVPOid-KW(gCOsF8= 
#SCxyoXt`U9):!|jFςUW}IJ-!Y!ꍴF+Du 70<-feveFlCɸa a]5i8K\-O!{ "ILIUP_K[&;/ IY~텨aPMä{O<|m\lt@3Kؔ6Tp)WPFwsE1g ze'C՜V0=oMrLVKyw[Tsy?jO{fhsU>eeo8~/9?t ,k$j]Ū2e 5ťTA8Q/@hPKZOb manifest.xmlUT ggux Sn0+OBfj² +յMT" >,_RJdMڝ}0yA.|zϦ]a$I; "*a7kG ;@CMp;3Jvn他G^^ E|XOI560<2d , %|9lB,![c 9F=3*(Uϴ# l6_%:SO$20͕ st\;3zbC-*RWSt5^=QG0bO"]its&gGN?4'/iӨc,ЀD ^  GQWۘ. (o>H\<xoY8P bP,d8KNe8E&WmLJ߾>2yPKZtRHandlerManifest.jsonUT ggux 1k0w 9V-j2t))ȒНCBȫ;rݧwT D4'kSb%рќ$ePv))n@|ak:-+&.l@JWyVB$]J0л1~J_y>B=m5A@j0ygE ̐" Km.eaJ)22/S X(J2}Y*Z_"vTK.cEB2? PecЦsə1 4[(m؅XX0]fm:u˜e%嬆͔\d?0{{;d:NE|l&m"z(fR#GQ 2>ёkXB8c.Ul7U0|znw7 DKbU-Z3bF~=4"ZCZCPxeABfA,y"Lp$a:dTjb@Px*,&1KE^`!W!yFp R@lQ.^X eI!o2m gA]o621!mDm1l+Z@B3aMm)3PR&b.b^h:'рKC! '尅{:Q 1A1$Ry!tBix϶Un3wMA._X`C%8Q HûQhP$?lHB#;B֢F.,@Y1 3N&h׀^-boj{SO{78c1 37c3\$<.ޓQ%^> thzxG,ۖhbu wARJS9x Y\1W\r3jvoP˰(=wGw;jHJP~jP!aKAe)p=Z7Cwy8/rqn mE0!P>3F[,:/4*{6i1ڧtUv}VN9?rA߇擾*GFL(OH')7b\pTxb_\nA>r)hb1䱭.RAH%83ESMMX)l40> wcTiLIɒ'82;9͖ kEC~y_n>vPbf7K}m8O1{S54oᇓ789/k>_28<~Tih^Kz20HϋY\#MH'Wk;RAU~vT OŁd?j^>MD]aqڛjȫ_ndly5~U?k]T}Cy~kx'|{ܯl3nAWW#'u=Q-M)5yuc#UkoUd}I9RZ5ݭӺRC[+-jbsbLdh[QWM-;V_v1{HUκZ_wқa3u.-;EK]`Z*KS َgl֬j#s(y.}h65E.XSXSp\޺~wΰf8|r)@jV][[`57F}őTX?9k#ښ+T|kw.;4,r{ `ϵ̎ڥ*6+o}QjHkGM!0ڈbp8g-8A-0\&*M볜F +BuB5ƩvB(8o&3""Hd>mΔ/ ݤy]7Dj|Z$~va#gx~9`'8@X!)^r#i>N@qoGkْ =!7Gz+G4T~Ζ͖;J6cfZq8hQ^H%SKN5ge@Ӷu f{73Gd?UKkCz\jt:^t|Q3Xå~2FnZd(S[S p;[5 . G܎_cjS_moѠq>ulڻkň[/.z!KMMIuvȴjiTe?wDzoH?o~a& {%tJ׽@q :vkכ4{[eδT{|foȽOE6c >ccV/Dz:G*ho%C ; λ1{ ZWp(ǐvOJlq|FXh" P Tc["6PbZqX{o`$+5"u pBP+T-l9Ie՜0!gy#bGrd`͒`eDYRW"Ǹ0 ~ һ"_َ!X`!N'> L]>% PZLn͂w"xdiƂû"E|!C=(QY|W5b)`f W0dΦ{{2=Əg]|f?\]_M⊝^;>8_OwCƁT0 Ze? 
#y4r^C` Y¼2>}Ya+-3"qAܐ'z0`,7 "^_kݤ/Y A 7txX4;jG!ߢ$ F,(Zv ,p%pŀ1H A^DAC{^i':`C@ 9r9ƒ_ 1e(bA.'|蕡Qlj3dVp/6 B+YP vCӄWdO> $LzThNϸ a4*>C|yt2@g4-" 8D-)_j>i8ZC^@ N.҇B8eZF,/`z _Es2tݍ }?Nݜ/fCh@aO=U$2R;|h7\#Z 0 9u¯P ZK cEzUEhuϦbxam5܌8D~T0̹j8p^@ ,̤ٺN>99Rhf`Xc)G $cacG>  5ݪQk7"t7|vcN_ ;v+"F"鑵_HF{qZґ=)7ɲ4J 7)!i]; 7xa@1wd7U:8%x}` 8ah38?0'&=|KQzŦG<+,R'qFnƗ),-(Hc1*3Ԙ,Ꮰz^.IB?h3 !zkZL9JS\xEޭ/B7[g CsAUF6;7jp%nHʷapk-sģѴBU=7iF7ivyQ̈bh(EgUq;xdp/V\XOs <Yw ;GIv_.(- 28%U8t AXZN`ha{>0[}- x:Rב8xCt>*98C?z[ZMoXHcK~pvsi{A]w90u?r* Gm\>t~X;fșq-  AA $"VqN_|@3BL[)..m2-vNJ -gxsI)mMr5+O2#3br$?)n"7e%:ՁkNT)_Y*֩e{\d~)N3oKT-ը$|:Vb.I>ȭ YC^| H[!ȃM|jAE7a(V,* FQ5TqR3N#˖>bn/O(&_+JlNaB@hs] K|7L0y0eԧ/\ MC"pgq7(jjYr; &YN")aowɓ"#!{6ppˠ*?A;-UcL6v[*K FH}͏OB Te%PRnNŠ%=Gb9(mU)h(*努pSZRҫ icZ,D`tekwʡeBmKרL-JGҭ.A諔jdV&rd Qjޠއق;!~L5nv3؟McC|l(UVdtvL/ 0(y;B213 ! v=ZVێ=26;ѷ_8j{B AVQ` tC=I(`#jx'. @VI-]c0mq?3WjoI VP_LOI$Ws4c(`@kL,ֿ{MڣMA'fDQY8jh>Ԝ}cy%{eaV ѳ'd}ArY֢rnT˄ B9u~6F/Q8{n6z,6Qץv h< n:Yya ·p]&}( *EE?ʜV~P/DEooe_}.}ݲ!R+whGYxԶcWgC/R~4 ]wUl^B ,'Ƕ'x sW;A Wl24.]qU&*!@%w-ڋ0n&-CEUN2\ǎ(Q7#ԝ'G#$&bsEZFgI\9v0oGadfγ 5*. \Y]tdNX|4fs`AitHqh\z1NU2HY;~2.=Up;@!;nVoPYC6l  0tZ$S. (/JRPk'vF ~MZKf{.Wwmz^i%,Qͭ/-C}f~`#G43ͼn_mڒ6VKTgn0$V52mC:j./Nxn{4ipZf|.@b m䙠Emgo<2/\i#Pl^ygGgd`&4lnk{Ȝ05\&EMD mpjupKt [Gꊢ)r0c8ܧjWV%usRc[d l>sG7P `Cf^3f?Fw) Obmk/-w`#wX`C5$XXSچi9%erpy9ω-kXo_gd13NwJ14\?hY[R?(( t"27gC5n(+@=^-DE4DYRǔpcuLB }8j(gsf EyY[&DVƦ+ /j;=6X.˝+z/EX1yE5XE&Ij mZ'J\9$G? @_7yd!ay˺٣tP4[67%>w#hE a9>czi*hnmR]Ǔ,]kۇP9Z-HyM'bǕ>^}PKZԧJ% CHANGELOG.mdUT ggux mTKo0 Wa-0{vҤV`: 2 lӱVE2D)J/À cߋt e.fŬȊ[\N2xWH;maݺWAW&UR{G@<|N9wʮ)$u%U.tZ%Ǜ9GΏW2OKɿ\ 5[A A»mеmOHBkǛQ&"C;M{>$"p^'ڞ 3,vtXr9VWJ^"[ \51Ź.{1w9FjWgԒyčg VeԹheRy)w~ܧ~w7,5<ڧ`B@=ֺ\J4l;r.kmoU{ VL6:qG̿lo۠5">5y/Kc:+?QK4F0M~`}nUфPKZ K extension_shim.shUT ggux Vmo6_qc.fn/ .h,KaTNQDNHY!Ͻ^IB 5,b3v B.)fq]Hj˔}b~9}ܮ_wWg< Go>G#!nkֲ#HI / [-YҢ46mU 2u!zCBJYB=Lk! J-hJf& 1gGjC'4:[ w_ }\~yO`B@_gKuUK)^j/Esu.Vc%O֫8IPq Bl}Q+RƤ6 ^%s*R(2yU l! 
/t mi~:j[B%qy=Ϣ)s >+֬~!xN!1;q>Iq=5)YM]Ӭy=~7㍔;0[0IXT2Ut wWIn#Sǜ`3nUyQ0Sjd;[*bZQ):Ujh4ݕ!@b^C .K{0T?١CfhE@?#,!2K " =zAC,pvp?D̴Ǖ]\Cqy97ŽB\f:NXJ`z3 Y{`0IˆN$ L }E-gyeN̓w9(_F[kkd?&?Q]ͨض>ZW'rz\9T*WMtL|&Mi&}S#gڤ_!St `@痣ph:R%b~{āyۯ_ j5C@D7f]K u{Qɰ\`UVH w|H{N5) ~M+fjm}A청ry3PKZL'vMUtils/__init__.pyUT ggux en0D}iWNړ긨T,A4IvIUw@B;r-"ܝno>~Oo$x1˖*gK.R5$H'B:YIT9nh:\!ZpD=-vz6FNIOW LFFAwkLR^㘛6ҭa\ߗmUo?(˃)FX4ǚRfNHgOpb׭0u6!ui8&ޔN3h]aQ(uY<|=XEu(5v{lv]y(wV J BT#5Sg5ֿ@[uC4ətQui~ PyePKZٺuUtils/constants.pyUT ggux e 0!w]PiA(v)H66?9};3Vw +;5#d2躌?<BDWwEod 2{Vʹ-Uvvby'B! 2yW&K4zې&]\07lCjؙtk7v3q8x=iIPKZթtZUtils/distroutils.pyUT ggux o6 :@Ցa eIbm 4Ȓ@IN}þGR%QKsM."hÜȏW7mfFW͖1+2R|5ajaS}|H8+΂'RQQ_FOɜ)atI/'rW}Ȁ?4WǖAx;ݤY4c$Yd* Th KqI  %+FظFā,;74 w'B-iAei7CQĈy%c暥8N^-N>3^Fnmn6L 00Nc} O:bdҘTG!]]vlI!a|qÈ/(ˇ"_z ;JAN0Fg`4Jeࢍ;DQon6ė Ii_96H]d="ISm 0dx@1TcG*0c0oĥ|冭C#_>6pNRӛ ^> 8pW9,߼>}w)碲T 78c!œ̛Iu'}sbIHuhM/l$++Ygnb65~!tI,Vo KLǘSo";v ?MJls&=Jt/2pE`4 Cp Ȱpࡄڔ61+XoF= pb;jRS>/QkC{`@R Nxz?9r/T##y\Cc^g(Em:6_÷AR@px\LX(t$ ㍓!kڸL%dB[/9P8Hc=1!OIFl?͊F`u>].!GQ32J4zk qג"! ذZ8 8Ϝ,u&s {$=r ]Hb5QwS^&fzҩQjVc(=TX @+t "H@yi~+md=R~sy6~X# ɮ~N9f7 4pХ߇LE/P!KuEi[xuj iwj~Ԥ^%WYN!k`! 
,z'>RS`PIL=em$ݾW8 itl)4j0-73K#g\,25pMk Z]qhPNs4xYmQ)dj|iӿY\;g SL {lSd2hU5N2lq3V۷4xzvXv|O?E ?C:!^IlywVj#Y+QD7* :zȫ0U8:T#d/-kޢJzL xfx>c"S$Dj%: u [kgv1蜡u)6 ڀPKLahT^о%2E 7De= An1&>,a˳ֺhGQ'X?_ dž('\TvmeH\lKR(6uL}Qg xMR]d>@~}ե€t?i %X܌c-~Fbk] ?JO/n{}v~QN ytׂHީ`*i;+1ftJ78wx>*S+aH+axyCual/["6"!*#Y`O1}쪜hOUzB_ 4Rjo@v^َL'05{iBiz/wd&?un|禣q3sI H)jߤvq䏫FBڹ}/7rt@ P/Io]xG陁\:KCI5鎕dIې.]MV %r}bb[GaAY?ɶ]x;|>Z ;voߑqf4jve`Auv5y n|~.&_eq$oϚ(\rbi)קwZ}i[l7SGقs5o5ft3x ܈H'D;~Aqc:nsN"8?7gfx_ ?@w?]':}z muYW[j`9iX !pK \oZ ,˦fl/Uʓ5C SYk3ؾaY"Zng"Sq<_ cBQ,+\ e %0۲Wʐ7 ;IDj td jjU걯XR42[]9^c -q#<#c\&r ?)i&bHa,U2J)|#gu EU7$q+FI4fX !gD*)0Ј oR1&zmNwߜߞI&׏Ӭ  - pq v.1k-;&Ddx!]92[45ҙpNeoLe"V2F o HZݔ0[@V־i4^ b}aQQ x 9x͎2t<(*drj ;SռbJ.LJqH |"d(Oq2mw} ͬvn1҆V-v6dV-bUٳv o5Hb.y"S1n_gׁg@q+ jP(H* Ͼ p "(uAɼύΒYmG/7-Um2gI^k]~g ]|EC//x KGQh]Gy* bc_D:9uX2ә#r'J*hP]@JOlHTό8%"Ijhlѝ,g̭ H۸SP`g~E;tKwZ5!Jn k~rer<-Qyzw2 排xedo]wP_˾SՈx1ƺay<'`4+x"ڌ@SR 8nV0{bc>Z+z,0ҝynkF.3D zY#8b;;=,#6>CE"`|mong~:%mC;TnrE;h]vs},UeݧdIF~]3mN꟬.}8@0LoZ+΁[+:\UCeCքHLݗVZכ/;a,Q7+Bfs`Og "tnEp7G1d* -kq]esWoo?k1KG!?c|>1|^nZ(NSa"Xvpu#;Eq5=& Vi_&sJߓ*K{kH ]$A@njpyhжT ʴV\~v#⨻ $ buSB-Sc]MW %è~- tц{'LVEg|jĦd"Tc/]|ډJj^/N܄Y@oP,#~ OzO |]~,q)S$CP\[`iN{VlOQ|XhIK 0"6 z~>| 75o]d(L/Ճg@QAI{BvVO~4GHkj-JyTyµQ!{({|KۣG'a@.hkip [)X b)C MrDH3U"(8\ *2{Wۦ2LՑ6lI$ i>unA{GGo0/?Mh_atZ:U)_HQIq7i ?z_?#0"QA* lEӛ?X\^wV7O2W#!ۊPxÀpϣ_Rw\k kFUtils/handlerutil2.pyUT ggux ;SҿW͖8bu]O}C񁫠B2@< "k ܭrIfzz{z{z: ȩl: m^e ͜, aM6dZ%b[5rÞZj/4UT;>=erGmɶT%ExP7`XZsbKbZ.;Phi:%USh&Q,ckP #Ga ]e( Gd&;=T$3b%WtЩ\tn!Nt6l`u$Q!P 6>BzjLPf ٦Eֆs7&,:9 M"{q}zu'zi i\u~ omR>NY&Dש(1P ݙREi e򘒱Bm!Sjy*`5Cs68iB Bw%*ͶLMz|+"AC"^er`B:Z=O&UIj)OmKJqğ|1m0nb\̑r@$s'9k4AQA'`{P#R#R:M禫Ǜ7_Daۛ0-*}~cn27S[C ASZKss]|;8?5͚PN# es?[;[VF}xۨ__+ ;egxl=ks|ojN+ߞ?ܫn6Ng|iNiw?l,V^szp}~y2Y<ͫ݊瀞/]ڿ'~#SM0+.a{l@hK =Y Y {H64#ߪ~>Ww3ͯИi`N4@Z`]>s/1-`'GpC'jY cM@ܺ9j:zn &>ISٝ2ߘ(xj#Q9/K 87]aa{*u}5t4)>4y+BSviHo3#"l*Ԍa( 0O+"tU*&z= ]Ld~0^}[l쫆zh}6u継\T]殊Ö1} _ b b)+lt)-GE[RjWtĜr ِ\ypԶ-;Q|{[81G0ZS-)1e,H hSwvk* IŚ73)aGI3 ';qpH6BtmjFt<"CdL !A+ф` 1&ps*iS1j#KM=itCxǟ  
DYRje#jHܢ%#bz!%U{*$&DLR! aS8f\F(UKj# dc;8bpHq H#T?;q#~&(å;#E5L5*_)`#z B"*F$*U,L"Ep)P2®eye78mC7hvtwZD[|(&ŧB7)1HLw ~v.AS7S uMg8ZLR'XBNІM8m $v4Kt>7ϨDDTVý]/i1]:h“Vw@1ۣM&mcuZ`{9uٳMm*2\B2!Ae.?ԧ 6 ;RUN]z1!gI`H&,eά S;62Cd` !j58oR$(Ur&;)KJJeKhHƲLEf>CbRS-ltʎ%7*j滳'e]Nc[uqWy𿐽o+(m31"L:] ]b$O M0E"ޟ2sr..%@2|\y #Z/cE[4cȱgF)JTD%|rqbf_)-@Ϝb "Z,}Urx~wf m(54z%Uǎi=+S`"&L*Dur 5eqcI]!Bʔˊ(' jX34Sz:&cI Ϡt'6(%D:E $+P XeaI)P yE/8ZsWd5tMeg f6Ǒ_(kˌꝌ<o0I@1shR7mOV65, ?GmYwR>`sQdL\nd<+Z*eRG6Ө~:s e%wI *B?V)++$)NÄ^&#кx=Wr{>b.?KuweI4>#d.e6𬔪[٪L0q/ pso_2gۛ"2 N.B fkYtTa~N~Ej*y2`R"Ʒ{.H^pq].+sěJʖޱW.}|ih2<LT[s0A[*_2Nh)vd p\i^p(]u &~Yw6PP/e 콐[5 C΍)IJмɻ*xbU#.,6E.mF~FqL`m܈aZE@tcތg͔3?#¬uw2wog,Q>hDXoJ#uI#\6#qIK<'v)y2ΰdyO%v~1ŃĿjort ]2 ULeO2]c->6 ptʯz8p R_E~SniB득q L ⪻|rZk9V09-YGe ƸG1¨3U&uaqTPKZ4K/`Utils/logger.pyUT ggux ]o6ݿpF=%,]HaZ:l$R8#)+ɻ}$BH^fiJ!I%. 6BIqJ~Fģ8Z_bDͿ b3؛Hh!fSՌUIf˄,?'Ƴ5_E aF1!KTd,- wNb%A'0%]K%"(HQߧw(J YÚ%#DF)ާ5FnVorciHe`e!y)!UnsB-Cۛ7_ᕛ$"EDф8\33b-d<;iz\MYgLJτH;Ƽl.߆sU+YU%=r ,ɺUA|6ql*]62`*_~B)Yh{dv]^GRH^T&= +($<=m(<[aGN0cMW|,i %+v`Vx 1u l=Rh9zEye\d¬;Hkz0@Sdյ `FdBIih5X`4LN n^k+6,wx >G,3O%]Trqno]%Ej&zsH"J>xRV5D{8l,z|֘+W߂[ ګ'hz~8N ُ {ZI↛^w[d"X\5w=ڳzlvls"02mP PhȆo\6PN ė /<Ln<(UiKC]`0L2n^nm+/+opS9B?*fGgREC%CJu/"FDU3EX7 Efd:^ݟFzɫ .qgc!Qe SEm-7z-8؊HL]BgG\%nƨk-8( ]\ɶd} klJ;@?Ej+.lwlT3ZM[EZ>eɼ~Ӝ=˥ïG_h6?59X {#)rM60K&L!@4 bC!cjSM] |1W<Հ)x@s($ӛD+ZTdN ԏ<$42F~}x}3m.}W+*jFEy E5ج}G Z%_H\X|uocjFD46*yVU9Y%3r7h<4dJF 3K3t"K+qP4/o2~[db;i\p:yy?WR,0GGB,h4N|J; iMc]6;]A7@ޔ`d4 ==V0G4U]eJ3Ml9 Ρ|(KgTU:ǝ].Q{R.B],m1e؛&w{_{Ut9t~n2z p I!LLaU5*T^r XKY#nB-*g.T#v!fUί0ZvWyZ[δCTq¾ȱ͆f 0f#,B+ҍj6 C@0V՚yv]놦C6-ז]?!OBlsi+cgHڋ?RS97 tgKHq̆{X{TUpʨgmv6d i{.hQ N xXL40p)Iݏ>g0rʄC7?K&>ѻs+e%?ڂV>`\187z/W:$U'%`0f (Hܓ˝q*?NsWD2/An  'Pg0d)/12Uum0&.`P$>a+c{)ChFz}'M%U~CazzME͹6Za+'C6e0[ RA@ šgx4yu޶Ea;VufH%/V)ρ7h. T}Qk4Vx]Y{REkw%ژD^G@RU/4-JACٚDP9Z6pspr'm{1F顉QQq[ݦ %MAwI!pNOyl( I3}rw2(w-kjvݴY/JjF@}cPq*xCm4ràHëmN-򘇂E0R$dRa*6q9h(xv!oVi m15ڨ? 
},m&x@[i(ty+MЌ]mWǟ)9u8ɺ]fҠ Ҿ04֕Ȏӗp<2gkdH,;ĩ+ 7K&#`F ?O(U7G_!ɮtXU,2oCDbJBu\PkQ]~T Ca~ WdE727' ^t s߰!hnvHi\\:C#*.[P} %ڦ%2g\IAh"YG-#$7KFyۑG-PKZQ resources/Ubuntu_defaultUTgux PKZO'*(resources/fedora_defaultUTgux PKZ.I resources/SuSE_defaultUTgux PKZQ resources/debian_defaultUTgux PKZO'*(resources/centos_defaultUTgux PKZO'*(resources/redhat_defaultUTgux PKZh &resources/defaultUTgux PKZOb +manifest.xmlUTgux PKZtR-HandlerManifest.jsonUTgux PKZC" J$ /README.mdUTgux PKZ̄Z b  ;vmaccess.pyUTgux PKZԧJ% VSCHANGELOG.mdUTgux PKZ K SVextension_shim.shUTgux PKZL'vMtZUtils/__init__.pyUTgux PKZٺu5\Utils/constants.pyUTgux PKZթtZ;]Utils/distroutils.pyUTgux PKZKQb 0lUtils/extensionutils.pyUTgux PKZQ>CyUtils/handlerutil2.pyUTgux PKZ4K/`VUtils/logger.pyUTgux PKZ_?HUtils/ovfutils.pyUTgux PKAzure-WALinuxAgent-a976115/tests/data/signing/incorrect_microsoft_root_cert.pem000066400000000000000000000015531510742556200277470ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICWTCCAd+gAwIBAgIQZvI9r4fei7FK6gxXMQHC7DAKBggqhkjOPQQDAzBlMQsw CQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYD VQQDEy1NaWNyb3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIw MTcwHhcNMTkxMjE4MjMwNjQ1WhcNNDIwNzE4MjMxNjA0WjBlMQswCQYDVQQGEwJV UzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1NaWNy b3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwdjAQBgcq hkjOPQIBBgUrgQQAIgNiAATUvD0CQnVBEyPNgASGAlEvaqiBYgtlzPbKnR5vSmZR ogPZnZH6thaxjG7efM3beaYvzrvOcS/lpaso7GMEZpn4+vKTEAXhgShC48Zo9OYb hGBKia/teQ87zvH2RPUBeMCjVDBSMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8E BTADAQH/MB0GA1UdDgQWBBTIy5lycFIM+Oa+sgRXKSrPQhDtNTAQBgkrBgEEAYI3 FQEEAwIBADAKBggqhkjOPQQDAwNoADBlAjBY8k3qDPlfXu5gKcs68tvWMoQZP3zV L8KxzJOuULsJMsbG7X7JNpQS5GiFBqIb0C8CMQCZ6Ra0DvpWSNSkMBaReNtUjGUB iudQZsIxtzm6uBoiB078a1QWIP8rtedMDE2mT3M= -----END CERTIFICATE----- 
Azure-WALinuxAgent-a976115/tests/data/signing/invalid_signature.txt000066400000000000000000000322601510742556200253560ustar00rootroot00000000000000MIInfgYJKoZIhvcNAQcCoIInbzCCJ2sCAQMxDTALBglghkgBZQMEAgIwCQYHgUuDSAcICaCCDXYwggX0MIID3KADAgECAhMzAAAEBGx0Bv9XKydyAAAAAAQEMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTEwHhcNMjQwOTEyMjAxMTE0WhcNMjUwOTExMjAxMTE0WjB0MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0KDfaY50MDqsEGdlIzDHBd6CqIMRQWW9Af1LHDDTuFjfDsvna0nEuDSYJmNyzNB10jpbg0lhvkT1AzfX2TLITSXwS8D+mBzGCWMM/wTpciWBV/pbjSazbzoKvRrNoDV/u9omOM2Eawyo5JJJdNkM2d8qzkQ0bRuRd4HarmGunSouyb9NY7egWN5E5lUc3a2AROzAdHdYpObpCOdeAY2P5XqtJkk79aROpzw16wCjdSn8qMzCBzR7rvH2WVkvFHLIxZQET1yhPb6lRmpgBQNnzidHV2Ocxjc8wNiIDzgbDkmlx54QPfw7RwQi8p1fy4byhBrTjv568x8NGv3gwb0RbAgMBAAGjggFzMIIBbzAfBgNVHSUEGDAWBgorBgEEAYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQU8huhNbETDU+ZWllL4DNMPCijEU4wRQYDVR0RBD4wPKQ6MDgxHjAcBgNVBAsTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UEBRMNMjMwMDEyKzUwMjkyMzAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAIjmD9IpQVvfB1QehvpCGe7QeTQkKQ7j3bmDMjwSqFL4ri6ae9IFTdpywn5smmtSIyKYDn3/nHtaEn0X1NBjL5oP0BjAy1sqxD+uy35B+V8wv5GrxhMDJP8l2QjLtH/UglSTIhLqyt8bUAqVfyfph4COMRvwwjTvChtCnUXXACuCXYHWalOoc0OU2oGN+mPJIJJxaNQc1sjBsMbGIWv3cmgSHkCEmrMv7yaidpePt6V+yPMik+eXw3IfZ5eNOiNgL1rZzgSJfTnvUqiaEQ0XdG1HbkDv9fv6CTq6m4Ty3IzLiwGSXYxRIXTxT4TYs5VxHy2uFjFXWVSL0J2ARTYLE4Oyl1wXDF1PX4bxg1yDMfKPHcE1Ijic5lx1KdK1SkaEJdto4hd++05J9Bf9TAmiu6EK6C9Oe5vRadroJCK26
uCUI4zIjL/qG7mswW+qT0CW0gnR9JHkXCWNbo8ccMk1sJatmRoSAifbgzaYbUz8+lv+IXy5GFuAmLnNbGjacB3IMGpa+lbFgih57/fIhamq5VhxgaEmn/UjWyr+cPiAFWuTVIpfsOjbEAww75wURNM1Imp9NJKye1O24EspEHmbDmqCUcq7NqkOKIG4PVm3hDDED/WQpzJDkvu4FrIbvyTGVU01vKsg4UfcdiZ0fQ+/V0hf8yrtq9CkB8iIuk5bBxuPMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL
3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCGdAwghnMAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAQEbHQG/1crJ3IAAAAABAQwCwYJYIZIAWUDBAICoFkwFgYJKoZIhvcNAQkDMQkGB4FLg0gHCAkwPwYJKoZIhvcNAQkEMTIEMOMp4+7ZIyK95QbHsg3yVM0Amf2ZGxTJzy6piXn/2dFArYiAtLZ3itHDM58yCm0kZDANBgkqhkiG9w0BAQEFAASCAQAwfS/AN9kbHk0/YvdfHc326F27sz8/6u0fAx7K1TroDRlIY2TY/hM+yzYcnkW/GJtJ7mWvsmZjiCxJR46MshOxcen0acCHYvcOZJ7qzn+TGCmNMC+deuKzYPtaXw0Rtu2ZcmAQsmgRwQnlK6qktOJALTeEGcEHjm5z3jrLm4Tq3BWztE0n+XTFSK7mjIX6aO3CLyUyf9WovyBVSSBhS8fXE0rivedklE0J2EX8Uk2eFzkHuppvB3yJlIHA1VAzZU+Vdyy9bshdBqsGk7xN83kTeN9i6BtllNTFHS4V3Odw8xXA9c4V6fWd7MbUsPNuIXWzlXD6xqIr3/AXJHuxMJkYoYIXsjCCF64GCyqGSIb3DQEJEAIOMYIXnTCCF5kGCSqGSIb3DQEHAqCCF4owgheGAgEDMQ8wDQYJYIZIAWUDBAIBBQAwggFsBgsqhkiG9w0BCRABBKCCAVsEggFXMIIBUwIBAQYKKwYBBAGEWQoDATAxMA0GCWCGSAFlAwQCAQUABCDIYsHn7suITR2ezBZwv9NA2q+ReH0UW9VetLg2U/Z86AIGZ4kD7OZOGBMyMDI1MDEyMzIyNTExMC43NzVaMASAAgH0AhhOXyf5G0c+hiep8hlj3ERcjt/D5k+avwKggdGkgc4wgcsxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJTAjBgNVBAsTHE1pY3Jvc29mdCBBbWVyaWNhI
E9wZXJhdGlvbnMxJzAlBgNVBAsTHm5TaGllbGQgVFNTIEVTTjpBMDAwLTA1RTAtRDk0NzElMCMGA1UEAxMcTWljcm9zb2Z0IFRpbWUtU3RhbXAgU2VydmljZaCCEe0wggcgMIIFCKADAgECAhMzAAAB6+AYbLW27zjtAAEAAAHrMA0GCSqGSIb3DQEBCwUAMHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMB4XDTIzMTIwNjE4NDUzNFoXDTI1MDMwNTE4NDUzNFowgcsxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJTAjBgNVBAsTHE1pY3Jvc29mdCBBbWVyaWNhIE9wZXJhdGlvbnMxJzAlBgNVBAsTHm5TaGllbGQgVFNTIEVTTjpBMDAwLTA1RTAtRDk0NzElMCMGA1UEAxMcTWljcm9zb2Z0IFRpbWUtU3RhbXAgU2VydmljZTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMEVaCHaVuBXd4mnTWiqJoUG5hs1zuFIqaS28nXk2sH8MFuhSjDxY85M/FufuByYg4abAmR35PIXHso6fOvGegHeG6+/3V9m5S6AiwpOcC+DYFT+d83tnOf0qTWam4nbtLrFQMfih0WJfnUgJwqXoQbhzEqBwMCKeKFPzGuglZUBMvunxtt+fCxzWmKFmZy8i5gadvVNj22el0KFav0QBG4KjdOJEaMzYunimJPaUPmGd3dVoZN6k2rJqSmQIZXT5wrxW78eQhl2/L7PkQveiNN0Usvm8n0gCiBZ/dcC7d3tKkVpqh6LHR7WrnkAP3hnAM/6LOotp2wFHe3OOrZF+sI0v5OaL+NqVG2j8npuHh8+EcROcMLvxPXJ9dRB0a2Yn+60j8A3GLsdXyAA/OJ31NiMw9tiobzLnHP6Aj9IXKP5oq0cdaYrMRc+21fMBx7EnUQfvBu6JWTewSs8r0wuDVdvqEzkchYDSMQBmEoTJ3mEfZcyJvNqRunazYQlBZqxBzgMxoXUSxDULOAKUNghgbqtSG518juTwv0ooIS59FsrmV1Fg0Cp12v/JIl+5m/c9Lf6+0PpfqrUfhQ6aMMp2OhbeqzslExmYf1+QWQzNvphLOvp5fUuhibc+s7Ul5rjdJjOUHdPPzg6+5VJXs1yJ1W02qJl5ZalWN9q9H4mP8k5AgMBAAGjggFJMIIBRTAdBgNVHQ4EFgQUdJ4FrNZVzG7ipP07mNPYH6oB6uEwHwYDVR0jBBgwFoAUn6cVXQBeYl2D9OXSZacbUzUZ6XIwXwYDVR0fBFgwVjBUoFKgUIZOaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jcmwvTWljcm9zb2Z0JTIwVGltZS1TdGFtcCUyMFBDQSUyMDIwMTAoMSkuY3JsMGwGCCsGAQUFBwEBBGAwXjBcBggrBgEFBQcwAoZQaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNyb3NvZnQlMjBUaW1lLVN0YW1wJTIwUENBJTIwMjAxMCgxKS5jcnQwDAYDVR0TAQH/BAIwADAWBgNVHSUBAf8EDDAKBggrBgEFBQcDCDAOBgNVHQ8BAf8EBAMCB4AwDQYJKoZIhvcNAQELBQADggIBAIN03y+g93wL5VZk/f5bztz9Bt1tYrSw631niQQ5aeDsqaH5YPYuc8lMkogRrGeI5y33AyAnzJDLBHxYeAM69vCp2qwtRozg2t6u0joUj2uGOF5orE02cFnMd
ksPCWQv28IQN71FzR0ZJV3kGDcJaSdXe69Vq7XgXnkRJNYgE1pBL0KmjY6nPdxGABhV9osUZsCs1xG9Ja9JRt4jYgOpHELjEFtGI1D7WodcMI+fSEaxd8v7KcNmdwJ+zM2uWBlPbheCG9PNgwdxeKgtVij/YeTKjDp0ju5QslsrEtfzAeGyLCuJcgMKeMtWwbQTltHzZCByx4SHFtTZ3VFUdxC2RQTtb3PFmpnr+M+ZqiNmBdA7fdePE4dhhVr8Fdwi67xIzM+OMABu6PBNrClrMsG/33stEHRk5s1yQljJBCkRNJ+U3fqNb7PtH+cbImpFnce1nWVdbV/rMQIB4/713LqeZwKtVw6ptAdftmvxY9yCEckAAOWbkTE+HnGLW01GT6LoXZr1KlN5Cdlc/nTD4mhPEhJCru8GKPaeK0CxItpV4yqg+L41eVNQ1nY121sWvoiKv1kr259rPcXF+8Nmjfrm8s6jOZA579n6m7i9jnM+a02JUhxCcXLslk6JlUMjlsh3BBFqLaq4conqW1R2yLceM2eJ64TvZ9Ph5aHG2ac3kdgIMIIHcTCCBVmgAwIBAgITMwAAABXF52ueAptJmQAAAAAAFTANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTAwHhcNMjEwOTMwMTgyMjI1WhcNMzAwOTMwMTgzMjI1WjB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMDCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOThpkzntHIhC3miy9ckeb0O1YLT/e6cBwfSqWxOdcjKNVf2AX9sSuDivbk+F2Az/1xPx2b3lVNxWuJ+Slr+uDZnhUYjDLWNE893MsAQGOhgfWpSg0S3po5GawcU88V29YZQ3MFEyHFcUTE3oAo4bo3t1w/YJlN8OWECesSq/XJprx2rrPY2vjUmZNqYO7oaezOtgFt+jBAcnVL+tuhiJdxqD89d9P6OU8/W7IVWTe/dvI2k45GPsjksUZzpcGkNyjYtcI4xyDUoveO0hyTD4MmPfrVUj9z6BVWYbWg7mka97aSueik3rMvrg0XnRm7KMtXAhjBcTyziYrLNueKNiOSWrAFKu75xqRdbZ2De+JKRHh09/SDPc31BmkZ1zcRfNN0Sidb9pSB9fvzZnkXftnIv231fgLrbqn427DZM9ituqBJR6L8FA6PRc6ZNN3SUHDSCD/AQ8rdHGO2n6Jl8P0zbr17C89XYcz1DTsEzOUyOArxCaC4Q6oRRRuLRvWoYWmEBc8pnol7XKHYC4jMYctenIPDC+hIK12NvDMk2ZItboKaDIV1fMHSRlJTYuVD5C4lh8zYGNRiER9vcG9H9stQcxWv2XFJRXRLbJbqvUAV6bMURHXLvjflSxIUXk8A8FdsaN8cIFRg/eKtFtvUeh17aj54WcmnGrnu3tz5q4i6tAgMBAAGjggHdMIIB2TASBgkrBgEEAYI3FQEEBQIDAQABMCMGCSsGAQQBgjcVAgQWBBQqp1L+ZMSavoKRPEY1Kc8Q/y8E7jAdBgNVHQ4EFgQUn6cVXQBeYl2D9OXSZacbUzUZ6XIwXAYDVR0gBFUwUzBRBgwrBgEEAYI3TIN9AQEwQTA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9Eb2NzL1JlcG9za
XRvcnkuaHRtMBMGA1UdJQQMMAoGCCsGAQUFBwMIMBkGCSsGAQQBgjcUAgQMHgoAUwB1AGIAQwBBMAsGA1UdDwQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFNX2VsuP6KJcYmjRPZSQW9fOmhjEMFYGA1UdHwRPME0wS6BJoEeGRWh0dHA6Ly9jcmwubWljcm9zb2Z0LmNvbS9wa2kvY3JsL3Byb2R1Y3RzL01pY1Jvb0NlckF1dF8yMDEwLTA2LTIzLmNybDBaBggrBgEFBQcBAQROMEwwSgYIKwYBBQUHMAKGPmh0dHA6Ly93d3cubWljcm9zb2Z0LmNvbS9wa2kvY2VydHMvTWljUm9vQ2VyQXV0XzIwMTAtMDYtMjMuY3J0MA0GCSqGSIb3DQEBCwUAA4ICAQCdVX38Kq3hLB9nATEkW+Geckv8qW/qXBS2Pk5HZHixBpOXPTEztTnXwnE2P9pkbHzQdTltuw8x5MKP+2zRoZQYIu7pZmc6U03dmLq2HnjYNi6cqYJWAAOwBb6J6Gngugnue99qb74py27YP0h1AdkY3m2CDPVtI1TkeFN1JFe53Z/zjj3G82jfZfakVqr3lbYoVSfQJL1AoL8ZthISEV09J+BAljis9/kpicO8F7BUhUKz/AyeixmJ5/ALaoHCgRlCGVJ1ijbCHcNhcy4sa3tuPywJeBTpkbKpW99Jo3QMvOyRgNI95ko+ZjtPu4b6MhrZlvSP9pEB9s7GdP32THJvEKt1MMU0sHrYUP4KWN1APMdUbZ1jdEgssU5HLcEUBHG/ZPkkvnNtyo4JvbMBV0lUZNlz138eW0QBjloZkWsNn6Qo3GcZKCS6OEuabvshVGtqRRFHqfG3rsjoiV5PndLQTHa1V1QJsWkBRH58oWFsc/4Ku+xBZj1p/cvBQUl+fpO+y/g75LcVv7TOPqUxUYS8vwLBgqJ7Fx0ViY1w/ue10CgaiQuPNtq6TPmb/wrpNPgkNWcr4A245oyZ1uEi6vAnQj0llOZ0dFtq0Z4+7X6gMTN9vMvpe784cETRkPHIqzqKOghif9lwY1NNje6CbaUFEMFxBmoQtB1VM1izoXBm8qGCA1AwggI4AgEBMIH5oYHRpIHOMIHLMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSUwIwYDVQQLExxNaWNyb3NvZnQgQW1lcmljYSBPcGVyYXRpb25zMScwJQYDVQQLEx5uU2hpZWxkIFRTUyBFU046QTAwMC0wNUUwLUQ5NDcxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNlcnZpY2WiIwoBATAHBgUrDgMCGgMVAIAGiXW7XDDBiBS1SjAyepi9u6XeoIGDMIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTAwDQYJKoZIhvcNAQELBQACBQDrPLw8MCIYDzIwMjUwMTIzMTMwMTQ4WhgPMjAyNTAxMjQxMzAxNDhaMHcwPQYKKwYBBAGEWQoEATEvMC0wCgIFAOs8vDwCAQAwCgIBAAICF3gCAf8wBwIBAAICErYwCgIFAOs+DbwCAQAwNgYKKwYBBAGEWQoEAjEoMCYwDAYKKwYBBAGEWQoDAqAKMAgCAQACAwehIKEKMAgCAQACAwGGoDANBgkqhkiG9w0BAQsFAAOCAQEAGZW5RMs1G0bq3jVYrQKnuOXpvAwWw8Bzn2iGQe3c+xF5wctd85X2gOA9EMxg5Kxuaog0EkOMDsEGQXBNjik7R19OlLl8Fv1VA
7NnxL0aiFLqkxefwxOeItOeLVAMYReM6xn0gWlUO489NpmdOC1W+2Dt+TaFyu2Hm1/g8g8jKfwEcvVMMM4FOWsWf3KqopzwcvnhOYHpiyOn0fpvHZ4VTDL2Ag88gNEVeeob85fEYV8F5setxNY4EkwAnFLGI02BrfuA7gyb/fyNTKQ9Z2YoLGP57+8eYvGYW8EPRI1ZqbBUd33cc5ZSYPluGwjwCOejA1Vte6CeTiGoDqIi7UDGSjGCBA0wggQJAgEBMIGTMHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwAhMzAAAB6+AYbLW27zjtAAEAAAHrMA0GCWCGSAFlAwQCAQUAoIIBSjAaBgkqhkiG9w0BCQMxDQYLKoZIhvcNAQkQAQQwLwYJKoZIhvcNAQkEMSIEICR6ezNvrS2Y6+N5sUKN+a2lGrluKAKj+QBnFF8Z1/ypMIH6BgsqhkiG9w0BCRACLzGB6jCB5zCB5DCBvQQgzrdrvl9pA+F/xIENO2TYNJSAht2LNezPvqHUl2EsLPgwgZgwgYCkfjB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAITMwAAAevgGGy1tu847QABAAAB6zAiBCDDCRgKILNJntDIUp20oiFSC5SreKc73DPGDg1f1ER2fzANBgkqhkiG9w0BAQsFAASCAgBY7Roaz+sA6W+ua6abDsvn1uV7GJnQP4DknrUbHjuwRKbK5j8hvx0Hato2R5v7Fyxn0xV48U1D45oJ719HHzKmo78ex3nLrMFNgTBz+AWoy4Bk3IDgJaD8u9gLKGwWznVmo0JxNLYlb5tcDrudlPdXW1llDuqXmQiNXEMZXVBsh/wtJmMTnW59KUFHOv/YoDYqW8i/bm7n6zKMEoKknDwQlPAGvACdJVi7AJZpAFLwV9IeNJbxnjlw5uKY80ProaCN8gw9dIug9q5MW2nD9bozbmXSriEBgpQdUKkDUD7xpLvLzASoquyQu9FVLCIgeB/ZrBNoS4tTrYOlwjy5UUiSxVhzxoFFA8+GOJStaGv+J7we5qGUHJGOUL7k4XDFUot6qEJL8VTiLuGDnmZo1kFtoAwJnOsGP92U0vAh4eiTloaKz9cEhLzGs+Mv+P8bSd3GnLce9L5nIA/WWXimm6KZLVq4JigkTxoSmhNNBxMopiN3Wuc9vlpBuFtESSbOZ3K6oJGhO5zBemNgooUSAyrMwXO/gcGNgU65j+k7VM8n08VnDDHXIhheIcxCOmoHLLJJOZboYZs6QK60L89mUop0IX4XRGJjD1wmE9lEoupvaWiDeQAINjlupG7okF4CR/evQPl5uKYsg3Gm61AWdb9W3n9IOLir3YKUpf87IeRpgg==Azure-WALinuxAgent-a976115/tests/data/signing/vm_access_signature.txt000066400000000000000000000323411510742556200256730ustar00rootroot00000000000000MIInowYJKoZIhvcNAQcCoIInlDCCJ5ACAQMxDTALBglghkgBZQMEAgIwCQYHgUuDSAcICaCCDYUwggYDMIID66ADAgECAhMzAAAEA73VlV0POxitAAAAAAQDMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4w
HAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTEwHhcNMjQwOTEyMjAxMTEzWhcNMjUwOTExMjAxMTEzWjB0MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfdGddwIOnbRYUyg03O3iz19XXZPmuhEmW/5uyEN+8mgxl+HJGeLGBR8YButGVLVK38RxcVcPYyFGQXcKcxgih4w4y4zJi3GvawLYHlsNExQwz+v0jgY/aejBS2EJYoUhLVE+UzRihV8ooxoftsmKLb2xb7BoFS6UAo3Zz4afnOdqI7FGoi7g4vx/0MIdikwTn5N56TdIv3mwfkZCFmrsKpN0zR8HD8WYsvH3xKkG7u/xdqmhPPqMmnI2jOFw//n2aL8W7i1Pasja8PnRXH/QaVH0M1nanL+LI9TsMb/enWfXOW65Gne5cqMN9UofvENtdwwEmJ3bZrcI9u4LZAkujAgMBAAGjggGCMIIBfjAfBgNVHSUEGDAWBgorBgEEAYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQU6m4qAkpz4641iK2irF8eWsSBcBkwVAYDVR0RBE0wS6RJMEcxLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEWMBQGA1UEBRMNMjMwMDEyKzUwMjkyNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAFFo/6E4LX51IqFuoKvUsi80QytGI5ASQ9zsPpBa0z78hutiJd6w154JkcIx/f7rEBK4NhD4DIFNfRiVdI7EacEs7OAS6QHF7Nt+eFRNOTtgHb9PExRy4EI/jnMwzQJVNokTxu2WgHr/fBsWs6G9AcIgvHjWNN3qRSrhsgEdqHc0bRDUf8UILAdEZOMBvKLCrmf+kJPEvPldgK7hFO/L9kmcVe67BnKejDKO73Sa56AJOhM7CkeATrJFxO9GLXosoKvrwBvynxAg18W+pagTAkJefzneuWSmniTurPCUE2JnvW7DalvONDOtG01sIVAB+ahO2wcUPa2Zm9AiDVBWTMz9XUoKMcvngi2oqbsDLhbK+pYrRUgRpNt0y1sxZsXOraGRF8lM2cWvtEkV5UL+TQM1ppv5unDHkW8JS+QnfPbB8dZVRyRmMQ4aY/tx5x5+sX6semJ//FbiclSMxSI+zINu1jYerdUwuCi+P6p7SmQmClhDM+6Q+btE2FtpsU0W+r6RdYFf/P+nK6j2otl9Nvr3tWLu+WXmz8MGM+18ynJ+lYbSmFWcAj7SYziAfT0sIwlQRFkyC71tsIZUhBHtxPliGUu362lIO0Lpe0DOrg8lspnEWOkHnCT5JEnWCbzuiVt8RX1IV07uIveNZuOBWLVCzWJjEGa+HhaEtavjy6i7MIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNV
BAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW
5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCGeYwghniAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAQDvdWVXQ87GK0AAAAABAMwCwYJYIZIAWUDBAICoFkwFgYJKoZIhvcNAQkDMQkGB4FLg0gHCAkwPwYJKoZIhvcNAQkEMTIEMPb4NDSina3q5LKsuziLX2j7dvkWGxqkMQUGK1UaDJ3M7z7AbRBCDUjEg1qruFwE/jANBgkqhkiG9w0BAQEFAASCAQCQZuzcSBlcOkoESLoiPpbfu4xRwHV/OQGelHdO+9X5SNOaoo7Jq1DJOTyeH07x0ycqd0fUxCcBgvCvPkDo46e2gBqHNvzc4q3Da5RcPzmwNBF89gmOBofC4DdwVzleQLX1X4VBq8qwHks00RQnbMhKDmSnWzRqfbI/OwL6mMcFimGTL7tAuBrQGtzMsXICsFcSyD73gPKk6hWZunXsXUlstKvF/0QfPEKB2fKMkL57TferCWCVDELkza3X/7KAdrKMdzlM1RYgx0I46NdbAKcfYb24sSRvLdWNpJHxTe9IXKu/O15hPSbuR/AjETgLvbr4cKjChV9A8IQo1X46zpLuoYIXyDCCF8QGCyqGSIb3DQEJEAIOMYIXszCCF68GCSqGSIb3DQEHAqCCF6AwghecAgEDMQ8wDQYJYIZIAWUDBAIBBQAwggF0BgsqhkiG9w0BCRABBKCCAWMEggFfMIIBWwIBAQYKKwYBBAGEWQoDATAxMA0GCWCGSAFlAwQCAQUABCBe3cNoNvnhCfCg9jFgD+G9cmDI7bdEKPGpaReen+qxrwIGZ7YvCVqsGBMyMDI1MDMzMTIyMDExMi45MDlaMASAAgH0AhhJBfaNVA33jUKm3f/faGCRGgF/HR/3VhyggdmkgdYwgdMxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEnMCUGA1UECxMeblNoaWVsZCBUU1MgRVNOOjMyMUEtMDVFMC1EOTQ3MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNloIIR+zCCBygwggUQoAMCAQICEzMAAAH4o6EmDAxASP4AAQAAAfgwDQYJKoZIhvcNAQELBQAwfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0
IFRpbWUtU3RhbXAgUENBIDIwMTAwHhcNMjQwNzI1MTgzMTA4WhcNMjUxMDIyMTgzMTA4WjCB0zELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEtMCsGA1UECxMkTWljcm9zb2Z0IElyZWxhbmQgT3BlcmF0aW9ucyBMaW1pdGVkMScwJQYDVQQLEx5uU2hpZWxkIFRTUyBFU046MzIxQS0wNUUwLUQ5NDcxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNlcnZpY2UwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDFHbeldicPYG44N15ezYK79PmQoj5sDDxxu03nQKb8UCuNfIvhFOox7qVpD8Kp4xPGByS9mvUmtbQyLgXXmvH9W94aEoGahvjkOY5xXnHLHuH1OTn00CXk80wBYoAhZ/bvRJYABbFBulUiGE9YKdVXei1W9qERp3ykyahJetPlns2TVGcHvQDZur0eTzAh4Le8G7ERfYTxfnQiAAezJpH2ugWrcSvNQQeVLxidKrfe6Lm4FysU5wU4Jkgu5UVVOASpKtfhSJfR62qLuNS0rKmAh+VplxXlwjlcj94LFjzAM2YGmuFgw2VjF2ZD1otENxMpa111amcm3KXl7eAe5iiPzG4NDRdk3LsRJHAkgrTf6tNmp9pjIzhdIrWzRpr6Y7r2+j82YnhH9/X4q5wE8njJR1uolYzfEy8HAtjJy+KAj9YriSA+iDRQE1zNpDANVelxT5Mxw69Y/wcFaZYlAiZNkicAWK9epRoFujfAB881uxCm800a7/XamDQXw78J1F+A8d86EhZDQPwAsJj4uyLBvNx6NutWXg31+fbA6DawNrxF82gPrXgjSkWPL+WrU2wGj1XgZkGKTNftmNYJGB3UUIFcal+kOKQeNDTlg6QBqR1YNPZsZJpRkkZVi16kik9MCzWB3+9SiBx2IvnWjuyG4ciUHpBJSJDbhdiFFttAIQIDAQABo4IBSTCCAUUwHQYDVR0OBBYEFL3OxnPPntCVPmeu3+iK0u/U5Du2MB8GA1UdIwQYMBaAFJ+nFV0AXmJdg/Tl0mWnG1M1GelyMF8GA1UdHwRYMFYwVKBSoFCGTmh0dHA6Ly93d3cubWljcm9zb2Z0LmNvbS9wa2lvcHMvY3JsL01pY3Jvc29mdCUyMFRpbWUtU3RhbXAlMjBQQ0ElMjAyMDEwKDEpLmNybDBsBggrBgEFBQcBAQRgMF4wXAYIKwYBBQUHMAKGUGh0dHA6Ly93d3cubWljcm9zb2Z0LmNvbS9wa2lvcHMvY2VydHMvTWljcm9zb2Z0JTIwVGltZS1TdGFtcCUyMFBDQSUyMDIwMTAoMSkuY3J0MAwGA1UdEwEB/wQCMAAwFgYDVR0lAQH/BAwwCgYIKwYBBQUHAwgwDgYDVR0PAQH/BAQDAgeAMA0GCSqGSIb3DQEBCwUAA4ICAQBh+TwbPOkRWcaXvLqhejK0JvjYfHpM4DT52RoEjfp+0MT20u5tRr/ExscHmtw2JGEUdn3dF590+lzj4UXQMCXmU/zEoA77b3dFY8oMU4UjGC1ljTy3wP1xJCmAZTPLDeURNl5s0sQDXsD8JOkDYX26HyPzgrKB4RuP5uJ1YOIR9rKgfYDn/nLAknEi4vMVUdpy9bFIIqgX2GVKtlIbl9dZLedqZ/i23r3RRPoAbJYsVZ7z3lygU/Gb+bRQgyOOn1VEUfudvc2DZDiA9L0TllMxnqcCWZSJwOPQ1cCzbBC5CudidtEAn8NBbfmoujsNrD0Cwi2qMWFsxwbryANziPvgvYph7/aCgEcvDNKflQN+1LUdkjRlGyqY0cjRNm+9RZf1qObpJ8sFMS2hOjqAs5fRQP/2uuEaN2SILDhLBTmiwKWCqCI0wrmd2TaD
EWUNccLIunmoHoGg+lzzZGE7TILOg/2C/vO/YShwBYSyoTn7Raa7m5quZ+9zOIt9TVJjbjQ5lbyV3ixLx+fJuf+MMyYUCFrNXXMfRARFYSx8tKnCQ5doiZY0UnmWZyd/VVObpyZ9qxJxi0SWmOpn0aigKaTVcUCk5E+z887jchwWY9HBqC3TSJBLD6sF4gfTQpCr4UlP/rZIHvSD2D9HxNLqTpv/C3ZRaGqtb5DyXDpfOB7H9jCCB3EwggVZoAMCAQICEzMAAAAVxedrngKbSZkAAAAAABUwDQYJKoZIhvcNAQELBQAwgYgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xMjAwBgNVBAMTKU1pY3Jvc29mdCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAyMDEwMB4XDTIxMDkzMDE4MjIyNVoXDTMwMDkzMDE4MzIyNVowfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTAwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDk4aZM57RyIQt5osvXJHm9DtWC0/3unAcH0qlsTnXIyjVX9gF/bErg4r25PhdgM/9cT8dm95VTcVrifkpa/rg2Z4VGIwy1jRPPdzLAEBjoYH1qUoNEt6aORmsHFPPFdvWGUNzBRMhxXFExN6AKOG6N7dcP2CZTfDlhAnrEqv1yaa8dq6z2Nr41JmTamDu6GnszrYBbfowQHJ1S/rboYiXcag/PXfT+jlPP1uyFVk3v3byNpOORj7I5LFGc6XBpDco2LXCOMcg1KL3jtIckw+DJj361VI/c+gVVmG1oO5pGve2krnopN6zL64NF50ZuyjLVwIYwXE8s4mKyzbnijYjklqwBSru+cakXW2dg3viSkR4dPf0gz3N9QZpGdc3EXzTdEonW/aUgfX782Z5F37ZyL9t9X4C626p+Nuw2TPYrbqgSUei/BQOj0XOmTTd0lBw0gg/wEPK3Rxjtp+iZfD9M269ewvPV2HM9Q07BMzlMjgK8QmguEOqEUUbi0b1qGFphAXPKZ6Je1yh2AuIzGHLXpyDwwvoSCtdjbwzJNmSLW6CmgyFdXzB0kZSU2LlQ+QuJYfM2BjUYhEfb3BvR/bLUHMVr9lxSUV0S2yW6r1AFemzFER1y7435UsSFF5PAPBXbGjfHCBUYP3irRbb1Hode2o+eFnJpxq57t7c+auIurQIDAQABo4IB3TCCAdkwEgYJKwYBBAGCNxUBBAUCAwEAATAjBgkrBgEEAYI3FQIEFgQUKqdS/mTEmr6CkTxGNSnPEP8vBO4wHQYDVR0OBBYEFJ+nFV0AXmJdg/Tl0mWnG1M1GelyMFwGA1UdIARVMFMwUQYMKwYBBAGCN0yDfQEBMEEwPwYIKwYBBQUHAgEWM2h0dHA6Ly93d3cubWljcm9zb2Z0LmNvbS9wa2lvcHMvRG9jcy9SZXBvc2l0b3J5Lmh0bTATBgNVHSUEDDAKBggrBgEFBQcDCDAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBTV9lbLj+iiXGJo0T2UkFvXzpoYxDBWBgNVHR8ETzBNMEugSaBHhkVodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXRfMjAxMC0wNi0yMy5jcmwwWgYIKwYBBQUHAQEETjBMMEoGCCsGAQUFBzAChj5odHRw
Oi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpL2NlcnRzL01pY1Jvb0NlckF1dF8yMDEwLTA2LTIzLmNydDANBgkqhkiG9w0BAQsFAAOCAgEAnVV9/Cqt4SwfZwExJFvhnnJL/Klv6lwUtj5OR2R4sQaTlz0xM7U518JxNj/aZGx80HU5bbsPMeTCj/ts0aGUGCLu6WZnOlNN3Zi6th542DYunKmCVgADsAW+iehp4LoJ7nvfam++Kctu2D9IdQHZGN5tggz1bSNU5HhTdSRXud2f8449xvNo32X2pFaq95W2KFUn0CS9QKC/GbYSEhFdPSfgQJY4rPf5KYnDvBewVIVCs/wMnosZiefwC2qBwoEZQhlSdYo2wh3DYXMuLGt7bj8sCXgU6ZGyqVvfSaN0DLzskYDSPeZKPmY7T7uG+jIa2Zb0j/aRAfbOxnT99kxybxCrdTDFNLB62FD+CljdQDzHVG2dY3RILLFORy3BFARxv2T5JL5zbcqOCb2zAVdJVGTZc9d/HltEAY5aGZFrDZ+kKNxnGSgkujhLmm77IVRrakURR6nxt67I6IleT53S0Ex2tVdUCbFpAUR+fKFhbHP+CrvsQWY9af3LwUFJfn6Tvsv4O+S3Fb+0zj6lMVGEvL8CwYKiexcdFYmNcP7ntdAoGokLjzbaukz5m/8K6TT4JDVnK+ANuOaMmdbhIurwJ0I9JZTmdHRbatGePu1+oDEzfbzL6Xu/OHBE0ZDxyKs6ijoIYn/ZcGNTTY3ugm2lBRDBcQZqELQdVTNYs6FwZvKhggNWMIICPgIBATCCAQGhgdmkgdYwgdMxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJhdGlvbnMgTGltaXRlZDEnMCUGA1UECxMeblNoaWVsZCBUU1MgRVNOOjMyMUEtMDVFMC1EOTQ3MSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBTZXJ2aWNloiMKAQEwBwYFKw4DAhoDFQC2RC395tZJDkOcb5opHM8QsIUT0aCBgzCBgKR+MHwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQSAyMDEwMA0GCSqGSIb3DQEBCwUAAgUA65VmVzAiGA8yMDI1MDMzMTE5MDcwM1oYDzIwMjUwNDAxMTkwNzAzWjB0MDoGCisGAQQBhFkKBAExLDAqMAoCBQDrlWZXAgEAMAcCAQACAgevMAcCAQACAhQnMAoCBQDrlrfXAgEAMDYGCisGAQQBhFkKBAIxKDAmMAwGCisGAQQBhFkKAwKgCjAIAgEAAgMHoSChCjAIAgEAAgMBhqAwDQYJKoZIhvcNAQELBQADggEBAHVBiF0EsStnu+qemRm5aMdaDQ8yVyht7MIw86dpl4u7wuyc5aJzLnPmr3YNuwOThKGLQ5omwxKiC87IL20fcXO7O5/BV9mJg61MmnzWj2dYtuVa/0DubVXN/lqYjUYykZj6x6CRkgp5TfCzg5aXqhuq1ioBlMD0Cdb0KgitHkY0Dw6TN/2WDjQooKNWEp1pg5U1DvJWT72FoFjJrg1vh/u7gDgN1e6r8eNc6Yp8UKyMV3Q4pc76/jlMWHUMFfE/ROvVbB168ybhpOj9ofFAuPbMI2PBAVqgt3hL/HPYa7CmttVAz6jFXMwskhKRFFlqCQ5Mpx6XhfAV/pQKHmQVJOIxggQNMIIECQIBATCBkzB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQ
MA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAITMwAAAfijoSYMDEBI/gABAAAB+DANBglghkgBZQMEAgEFAKCCAUowGgYJKoZIhvcNAQkDMQ0GCyqGSIb3DQEJEAEEMC8GCSqGSIb3DQEJBDEiBCCn6q/X5VoCEPAsZljxvXWFYitI++S8MIAuWfSBQKh6+DCB+gYLKoZIhvcNAQkQAi8xgeowgecwgeQwgb0EIO/MM/JfDVSQBQVi3xtHhR2Mz3RC/nGdVqIoPcjRnPdaMIGYMIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTACEzMAAAH4o6EmDAxASP4AAQAAAfgwIgQgEr8hKsOqogYaMKQm2kkgZF/aHZbUJYuiSTfTV46RwlowDQYJKoZIhvcNAQELBQAEggIAPgPfe2lUEcU0rPNXkMMnJx44AZlgwH474NC8m5/f5BkijBLoAgzBM+z7oyJS/Mqwl2EUle6HFIxG8b3n5amEmrxVNC9XXaqpBq74jic8zH+ecyTEPhaVTgzmj9hFDzDbeFamSP6M+XEuG6iUpUWSxRwEu0ip3O6szA5lib8NaLXAMLW9M3N8sosqDHRwz/XC0qn18t4ACatUY+RkfbgmGkzDDsj8IQuWyEoiFfJmxVOaMpCMuvOrazHWqavh077Lx6cHwDxY9B1anDR9OsqJK27TYzopTsajZbEYPfk+Njb5S4ChBke+/kNdp/KBJyddFeNkM9DvW0aKOLZvSglDWepFdd902D2M5p2TcWEN6CHQyyfLpH8F2hrznBtLOVF1N0bys3l6HmKiRW1GMTMnBwngwzXC8JwHr2BPlM7CS4wzB4NbQd7B7iO/DWwqjL7VHIG4nBYrJbuhlnLJf/zzKmumtS867mxvWuhyZLTKgLnjRhMqdlRy1y6T4Sap2nhYo6BOr7i76a7b9goXVwl4jUI75tCLbSQQXQH9TflBWGGESoCqIWE2JgchzlbM3L+qXtJ7PvbFw7SXM5QleBm9L/39Wf1ySh0GZzuVUjnrJepnkfTL+86Rc+/zHM4JUZiqqlnStBdUYIO/yRsdRZvhF5imcfN+OflG9Xo3zPrnnuM= Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/000077500000000000000000000000001510742556200252175ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/Certificates-incarnation-3.xml000066400000000000000000000123351510742556200330150ustar00rootroot00000000000000 2012-11-30 1 Pkcs7BlobWithPfxContents MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAU08PI+CBUqOd4 Nbte7MLw2qCYn1UwDQYJKoZIhvcNAQEBBQAEggEASTTfHNyY+9hdXd+Eqtqk+yPb RA7rRXWR8tQAJsdy3zAlu8WHymq945fnsf0bAW4mODIPYhhevmdo5VaI54AzAWhk EfJvtRQlZZEMGZVKgUSwP4AG6cFaSnJuAYbi27nffM45PgD26O2WjOhnmM7minEC 
31/wUoxjxVOxIc8x+Ngo+TquyBeaK1iXcchwIUnbM0xRYMfccOAEhe/iytKFPzdg DJbDk+KbVGaUuUfhF+o4mMyJNezMUFxWkePcUgP12li57GTJSIyi8OQaFUu1qh0L KzQ2sYl8U0WmWQBhXqvuug47WI/6XrRDpKslIV1aV4XxD1Or6H3nf0fULjQZajCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQI+4Ch/cEogOSAgg0QvlelG9yDK2GE XX1wn8Xw0wCt+zIceXs8C6QuRSmZLEkZVv8Y+duMwi2A0tcg63HOmY2AfIPvTTt8 eto3YwIklrfF20jBvCg/pT3kfm6TICWmMNd5XesTq8UNmkqzJQQ84L3Kbs/ix2pG 9RaeXkrg0VO7FBDVH8b+jIT9IVDAEXgBQVefcCImVZ9L2hQWNABFrFXAQSTKjfFJ IEOfXUhTiH434V1RKJczhFiH5SNZ0kbaRjmaQkXqbXQ5kKoq8VNkmFc6vPCclTmq QJFfIUTepljWW/HuVkUycNYQQkblmWNF9FEwSx++x3Tz1FLR3UlzOkJCqr+tS3jv WFnI16VlOHaaHA++YKhW1PUujJcEdZaXBE0FC6JZF7IOAOjSdLSmRL9yU95erfgZ hRo2FB8EWVZitIG+DPU9vU59chGpqXYzZU4/aTpedGeWSZ9GFXRqwb6htmajjTWu l5fIME3hWt7kcejpuXCTDcdG4YcbngZu4hcepMrUhm9g2BdmIDb1YiB7290PMop8 4nNo97tSBvhzk300cg6+pfxy1iAv3++g/ggOI+Y/gFmgN88mmBMWm0+mocJ0SZGY 3+8K/8pDpJpfAAXSjayl7T2UXUdJe8fpOtetiHUr2zIbZXlM4IQw+0UMAVjTiaRT BIDGoPEcpCcxqPlSTTEie166uzzPXG9skVgennjN6YopwMC/WPaFRJu/eTlQOqlB EqvK9TKJG8u2yp00J04MGYXluY4l/o3/KLpT0mCOeOJm3KerfwQ/jU2oHHmvIATN XYy32ULqx/CjL+N3ax0Nu+UrgMQPcVhrTN/7lnZpFLYwXetGzH/4jdNfIfTc4yGn 0GlVT6cVgJyV8wyYpbqCxHtCW83II8vXLjTfeIffHBoJU0fMMPWEIxRuMQSksm0H F1u/rfGVSXnueshbJUD3pnvTiLPuWcOexSxP+B8BCNfi21jX5Ha+U9RKrKbHc4h9 PkiWxU6ZEqCBkdP9ssKnmMKMsrC7sZRoYziHNeqlZp/GFQmkI+DeFlqSPn3Lv9Or HF3bZokZCf0RGEkZDPrigaiEoL7PH/TtVZF8miL4JCLB0FVB08vWeeP5zjQT4H6J jSC2pw+5bA2UWGshgsKKAJJihYcOuybtzglh7nqmSSZcszz3GyuDhdR8KDrYwChU Hn13+rSWAbbqtxSyPc5fd22Q4Do2aD6PVdRadHjG0qeE7Dq46YHT3Z9KF0nQTLk8 uYq8hL5+jQEgTnUB0yJTKdEcg05TyrMfNHWuM1pru0bqpf25vpwP5t+Sd/vgWJNc XtRLWrMdYBuSG9zOyLaH7bj0rcMhN3ULisKej9IT/xHOWSXXZjNoe1P3q9fvtMbg ZXAale/xJ6rXq6mLvZXivJfQJkPbSV7fByPPKO6TMnHbNEgLOGO3XtHEwC24JKup C0ohq03QqQHEisS9Mk5LvWmSchXR3/7vCtJFyOemQom7nCy8cx4Y1JGmZ4SGSaEs QZs7GC7Ftb/X82LRuknvS19ApOVFEs4/8t+LviD3x7Z9quVv+fZvydhzNKGRR6kQ fYZwK7rqqkvuFKgXqNbzlrtlUqOUPXJgdO7QHOtU8z+k2NzBWfOp6j+Ef8rc3GDU HSVZZ/Lz0RWedxRC1zoZJSol7ckMxIGIpDhtb9xgDmaGKILWOR9k+wG6+7ywQ2LE 
PB3myDOclvKUDyb/DqwRS9ch9yyYSmz8WXTgdSeyOjp8QT2JQuuOOhoooHuKSxAk +7v/Fh5bNGtjHByuzMYSdLcWsLX+UohpDoc1heVgUA3R6EuIOJTA0nC653YmqIBp R5rsT+esub/EndweZTacmc2nDJxTKdZgMvdwhnsOZZBGsOaD7MXAS5vCsze+PQmY 4+VqqWPASaclV6CygN4qSxmww6mVgmAgWVmJqfa6vOyb3zhx68TkNEp9rxJFcJSJ NiTTvWe0nF+o2/a1HZ8rZFdf65KsqGSiqu/6HoUuFzWLxRCqSjB9RkfSqrDVAVim pwL46zGRsqZV+5xrRQlxINNUbg/D11zcp1zdhQvhDrpBoLMjK7AaxA5msPYFy6Gm KMRAG2kyi802W5CPZWkbiEoUA8vkiICuxN+Pdh146zk9Ngl4PC3YpNCMtXK11ifd hYxmWqEuQ2AcdVTckosaWrFMn5MqEcR0aAXZbnjIMgTZ6SMYJBZMWjzJhe/UQjTo vICK7KAH82chpW2hG2I67z7e1Nv930RyL6JbYI8mSqgccPBzOBUhpHvKDM59z8Nc eStEYDdOcMz8P+c/H3Bh4WsyMWMOwWvjyy6GX5Bpl5z94tWFRn6W4FK5iDqp+HHm v5W1+hlFBxXtuzBcSQntcj8LoExJ2mK6BhZkaeAESMqPvNeNFmhEVUGq0/+c7T4I L+1YkQPcm/nIpwW/ITmkGmi5n5VsvbJFDbQe+h9LI2aqvWtzA0YT5Ed77Glbdbgq qB8EyXdr1BsBb7s7bbXm4Wf8UJkCZESg8iQExkUk8HqMJRxjctjma0DyyKVi4j8Q +BA1EYBEX37641S+ZR9fYmQeuULGkf3d+w/ttgvm6YDZivsZYWkTscX+lUtoHhWN 5EOAfllI0/DaGX15mGONMV8YA1PoCNEX3yKJ5tVGkxxUPK+Op7ZHvJmtb1fPMRRY z+evQ+NTXTZZzdr3Kfs4yYbuXG4e1odm2v/zBKG7JF3yWPMtXZZiMks/BkaXTq1P LrB0VxGcMsLeQ5HbbWJtchyCWyy63CNNbfYNohjxru52DjaAQlDKQT9pOiSmGJzb 7+hNnKYnOfo6Du2ljz7C9C4mxnRJsRA2O9Cw66J5XPy1W+2+RmvP72jXwoFWYzPq jxNs2wxOYQjEDpXBTmCbW58F5cTbSTk3D15iCtYtf31tpuPpHEnz+2OvrX0WhygN esZJnln2Tu2ut1pVhAuJDLZTj24Y4MP0nmDINuLDAkFji0CwjACvW7M9SbIOLLYU +5JHHjB7wqaTXWFzpt/ZKXMXlwCzWjo3pDERbrpYbwS3GHqmtcyIZK4EA7Ulka5Y 7rLPWS5eKcjX3tp2FyX5pD52TpuUMPAk6vyefX+NznP7opvJpusHbkschojFVRDA zHIpIGeWjYcWLk5YTPagzH8o+4ci1OEk+OMc8i6PxkQDeBw1RiCAFfBnKPCSEtFk KJlw7fspk3/chA6mmvOHjkrQmUhUuDxAVGCVxl0K5LU3Y2IQxKGtCJk5YO4XD2e7 5b0Ub+wy4Bb0l+z8HjuqEypFXDpQTd80NbhStZBgf2cB01elsqmKD9sT9wpFGKbC VaatDLsLx4XrBG6ueoFKBgFL6l7afEPct8wuSoUrX5MAGlge5xzQYAD5spLlEa9G Dt2KiPCsZcqWiaHiw5vk849FXUcfFfGl+0rEKhzcfUn3zkL1mGfqZ8Nf7qjMXdMy dbUUQYMZXtMtK3fnYBnavgaUcu0bZ7Av+GVTQvDxfpzSeMW8lK7Ko6mINFQVC8dx TEKWX+eApFUnTb11vNNxwxdOB2l5N+kfNLnVMhuYd7l8IHQxMMQTcf8hYu0owry6 JkIdkhnF1kXVC2YWxo4VrDPwzkBWZE28ygBNhWgKCRhZnnbDEWPuqGP/IaLN4vww 
1lqkZltqZDddXvOTXN/tZmkkQHt2uP264vqJB2BkGzxOll5UDQ8V3gXwheuUGxYc gVL4ZJSKfHnUp6oRafIBnQs5RBvqdj2wewzT8AyPWImRG6fkYvsub8qIFqG6mu4Y ixAQ9oTgg/KOXYNsfYuLGswu/aNnAqMEjfMerSx7dDu7teETkWb+IQJtodOdE/LI yO/puds1M+V2H0TD36zXRyvEnpfm5BTURkxM8dI6meR37/JGtObtjg+Gzjpu6HGm sIYyhG8bvV0Vkuip4bEgBB6T39dt/DeElHABthUmzFZe/QC8j7IJjyCz40JWDJSo 8wPtOoLnLeX0ynD8x8A5NsQk3W9fgEtv0WG6Uahs7P8GEZ5Uh9GPvWQpAkjKv7OZ XVHJdTBMJICbB1Bzr8Nl0qPfQrhFzTNBMjBEwyaBpzRiV1hdTB2YPJPbjQQtQGkO vT/EsAEWwSqDrQrDCfGRl7mhjdAsVFMjERdJE3/2TctY8VnLaRzUTSGkpCKxl+V4 CLrBi96N80pxer5eKYtt5gtLFw0gZeeeqb2VDj6ChVnUjJ9r0TXzyy8ztwpB8X5Y mZUDASD1acdZZOiEp69WA6juQR0EGKQT5phh+k0HbziW+bXMM+7YwiRJzwX4obnd wgF+wyHht3Rzaptv5JSZMkc1RGSFIdWUwEp+3Ik6DGywiTcVkU65TQ7CsQJjmmkL AChG7tUBI4KmolT9D0rj3A90//wl3ACkCFq94m0BZOFiimUXFjqux135P5i37XRJ /8wgWZ0nzmXdFyTkEJEessAMbCkMiDHwaT7Lbs+S0qFeobh4DD3tkONnqSNa7md4 945Z9MJiapzD3P33TvKhyQ0wHe5W0z4= Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/Certificates-incarnation-4.xml000066400000000000000000000123351510742556200330160ustar00rootroot00000000000000 2012-11-30 1 Pkcs7BlobWithPfxContents MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAU08PI+CBUqOd4 Nbte7MLw2qCYn1UwDQYJKoZIhvcNAQEBBQAEggEAU1y8uuEyQMXa7eGlK/PB5F5+ ZEYBHRpBpSKlyIpTJhN+emNLtuPRlIJ0L0zlfkvjMmnoApXujUb91tnHVQu2tUV4 9Ws3goQjqIb6baQmxf8pctsL56vHts763Wl+AwiFLc7twoq/4FmmqwvFzxHE+c2o IyxxYY72ZNorN5sux0b+ghEeZHkdds6uR/DHtht+zCy/JP63Phf53dAoUoO4p9Ym WJhe2Mccv9t/yrtneVEIw/p1GqUPSY+tiGMNMxNvXlUrtdoaUzyzzXmqVbMXb6PB bWFtkkRJBCMYA8Ozh4La6y8Y1jgFj6vCkoxX3s9GVQbpeyon7leanAiHwArgejCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIY87YJhlLuuSAgg0QnoSp+Z+aYRAI uNaSDIyvQ/1/xYMW6TCqp19yiOGRu5bzDNX0tKN5cCLIvRX5FZmLbLbApziZlMsV wrHCmVBnN8XYCdZsK+Wy39ORULAfurkjem6arn/NFnfN9DLiSEYwKbSC4VNegfkT lJlgnSVUs7Z6v86YUEuwBnmvyCDbIit3PbfKJzaCr9DSPXKwBFRZqTTsFWovBGaA cQvbuqxbbkm4cNYmwmT84TXhjYDuTfP8KEPgdBD1F8cqB+e6OuQSG3N+tBHKi7DH Gc+30IimJVcrwbPCNDlteHHTLxaeDM4g3eoyj7B6J+/kAMLdoWuH9kwdr75Dd5OJ 
SGY7utJ+v4A92SKc7G01tQnHZYOxn+JFKWQ4y/CR2lTtYfhh8pd9jSSHsg0jGtKs Zte/mpfrOHTpXd3K7F1/UiXTRNWbfy/7pBWPdqgSaOAuVH180VAHCDnaOtvf2w7L tJN74gesbcwPgQiAiD9um1eOqOMObu3gqXdeIkMksbhrTSOzLuO8c3t0R+8lL6QE 2K54t7PMDQ8ScmktNMWG9heBbZmAlkLZ2VK+jfGpbVEGRSWKRkpBMQOqLGh7iRkv EPtr44/F5cWwXVN6ofCg25aGwLrAaD9hlprGNByjGezjrFxj4NSDyKYmjhfF4+RA CfEN/j19OadJgY8ByH+L190VOOc3Xcf0aiFJPqV+MmTm0QcOmaOIPFfwRHjWiuS1 K5kzX15uDgIZED2NWvJtwyuJ+p8xcWtmdE0nGxhOHV+3ZZu8WZ9Qv7LU2eSJQ5De 5uzb5sDzVZI8zfQ6LX2nF7ilntzxzODcv5Eoor8NQAU5xPKvb66aRa5BV5xzCl8A /FY61ztGpCD4DfPHFpCldcHKCPk1qzu/7kL3LQ49DV5GcVwzzanHQaINWo5xhUu1 XaUcWe7LVOPYvqCrSF8v3dB56RHF1MJMxCNdZo1oVup3FjIU3N4ZUl5qX5Ixetp2 ftUZHsw3r+cotronsrne8R4gl3PejIc6rVmz7cpnPY6l1T70QEEtnxcHgqIFZeCB n3IHOBOlaS3DbtOVzclySUF3z1+Gtk8Entc1ksNX2MwknFUM2AjQWuvjVDm/ZKaY kPtbr52IDKURYzDecuBeTuZCq7ztaOqdc0D+sLFn4Z8CBzl0OdOrDU25h/wir/r7 DiCGFAGuPIVtsaO0C/aLCM0IJlDW9Lj9YMXy5jZ4ziRT6CmarmjO+BLBL9yHK7pR rCEJoYRZUyw6nAZNW3EkftxMWJNe00SkJyccMPLgQA6ORnuHC3wo4EBH62vBA4vq JszIKm8xselXbAQoyeRtXBVvFEV7gz/3US43K2HoHi+Z9N60LRw7V+aihz+nKTnC lioA+owDvgsJmVwuERse8ZaUwXigfKyCUnrbEAYFeSIQyvKs0TG6pAGm2ZjqFJw/ L0HLPQVUf6HLZY7HD/xCz21X3mL28VZ82Fr/luOqIk187M4CnyudmZX64tS/o+TR n9lSJhV4H6y5WCCTSnyGnjcLSm5lMg9H+4vwRB95qfKS9B8ZLSesBbk/VUwCw1fw IeR2S1S9PUO+J0lUKGWWrBjDNKIkR5vVLXyazO+BFz6HIq3U0Df9Gya3kng4BfZK a3X9ALP1PEdfFeRyH7T83NN20686Q1uSzkKIKmKYp5YRuUsZdrGSSIbgO5UlWayF YWQPIrpTy+v2lP9la9YLPdSWG0a/pMA5BFzovHgSJ733yowmw7sqn2wsZyiMTTOy lbF7im1hbB3bfzow6SA8IE7O5XiAIyIk35HNJswPMkJWQzzuwNGKIla3f+HfPaRO 7weJPIEeQr7jUdgiQLl9A9/kHdp8jMy2jwrys6LY9rwEMAodpaN/yXYF9oOFvBsC 75az848gx7OTB/OcBKFNkeKkdWYo3GYP0DwzTcV3sV+bIllKGzGhuN7KOyn7XLSN ZG2kEm/+s05DdxpagcGyAWKT6myDjuMo/lAll/A3bnmwrP/I5YO0bLn2cmEq6dGx AcWC5eELHoKo9hv6pjU9BszkHIgMq2B6Oe35xnAi36RlarOU8D4+xop3IqN2Jy65 eec15LopFUrCcVgSddf7h+qS0jQGiEPuUNZAuZBA0ZVmHzDtkHJqdSpSAXTvykVC GIPbCWce/0X9UxxrciJ7foXebz7A9b1dkEMI0UCNBkiO5kGVJBBxGcHOtYvzWc9+ oRhN68tOksmNFiNIuxTRG1iariPQrDocbsEy+yDDmSxJPZ4wNjPofjZ1XXaXkjs4 
Q79ptA8JLwzHv7dRCsV+r3GUllIn5TOb9adbIowmZG+nSWq3vE1AoHgymwYo064p ZlcrtsZRfo9SeqMf3aAOgQtYDpCi2QhCipQYe0IFYWdShdQzxqXyCObm7zey6PnI 4LZ2J56Z8QXPloo8LfsmcqILWEMOxCc66k5+QFb/MKDV/lYtWZzTES/TFhRdNydw yCdizmdTWo2wfk9YU/pcwRZUAzhk+/JQJA0tef6kyUv+ozQue4JVw8UBRoWJRrXf mO4kGeEpoVu8Hlk3XVeEQTEMP8gre2t1WSQhgRuUPWHvsVMjRfn4K8rk4MxU94Op XselOgz+E0n3XpwHh9gcv43t+qd5YBpE3uAI11hUJpZqsjAo8AiAXppzXZQ9Xx66 duz3UZLobVZL5CwFuCiaE3b3rx5Qlt9SKNQA8aG6e6N1hwHzl69zT1BN2ZIvrSuL ihtQ4E7D6KlEWhPV2c12tMgiDs1CTbOyY5uX8Q+dMilp1Y/5iC6LwzAjJ8IvhtqY NniVsVocO9uyRe5cYPLM/F/4rcnnmoIeTbPeGiI91vGnLH+wrgZ/HSntN7C5nG6s oay685GW620S0Ac71IcRZajNTM7Rfc9JpCNzwb2WnZw4LKyybfXcHSStk4aqw8P+ oRsOLgRLO4m9CYnsJBcVX7oF+/IUWyPfL/4sAIUIF+7mXP+Z18paTmbZRIrvjwcA +QhctZXYVSeUQE4RtLu7pKxTYlZZesZqVhEXj733RMwgYuQecqCMTcF6StpEsKPs BUZDXZZrCl9kUMMB7m5bsnBGB3o/QbyS/hkNwI8pVmQHNIVKdKOcxH0cCRouKUH1 MzYxuZfVS1dvgkhVhPeySy1AZ2A/oBFFz2PWxzftKwaZ5KwDx4VI8x3yYaMuXmvK cyIWS+2s+Ky/ofOOAJPYiv2aaKtLnOjo+78oLyAm7NVNaQ31JFVPAxCbmEnIu4Ai GngAH4hmVp/f2/pfGq/OI/HFFeAwwsxUKWOsLu+Di7QcT81PrkHVFadmLXxA9iyc UmT5Oqg0h4V5PWwaGVfgDMFs7VO0dThZ+cjXLGWvC2bTWpvxJVsgq+J/MCIZsiSJ eECBhDvvsKCmigM9+qQ7iPjLWP2DL+CvbLXWLVuaj+rjwpoAx+2ALfWP0aRsetBk 3vbKm4Pm92401TyGmV8HJfpgMrjbScrmsdv+10ljj3eigaUGGzS0UImJIXEerbia 3m31u8IaYF0fFsONHa0+0RuEhFVhtgx3ojI9wN6OM4sxIgDMY+Iyrny/Dn4qlVJo bmW2hahljpIgT0x9KwZgflyM7VVckRIk+SzJDmqqYdEVk6CnxpKcVJgaD3z/Q4ez 0doYtQeeK7W4EWNJACosqMCFKnFZlOyMELE0gyhdeCgM1xXOU4nxzzUJXFAKukSi 6RQANERsNoXnkfYd6Pt39k4IaBkJ3/lmBVdONqoPDjwDJT887kyFo9GfxgOZ+ZAS KlVD9YiDSXkgq4/KGq8zNb0jZiZjd02uzzYVvLfKx/TGhVy5WEnf2IeC0gLZ3wNI jo0894/Ss0uXbbl5HoOhLdOQbYuZ5QB5S6W6TbcM5Mrt9S0rkJY7xYxnlmXTQ3A7 q+wfi5IIAIYuRd1uwZ/msCF6L2UM6y0+So5P0X8YVY4tT1Oq8AxjJVLVMZVBPq7b nQwChfVf5HOEfNehO52UwRA1C6IGH9/2T6lPrJOuZp7oxUE0CtVYNDbqcj9lbb7A cEcQjQzgYnH3xmj1ZjBpyQ9zL5o0g7ZTwAq8zA1LhMBjrgSlYd2s3947Ii4xBaof CCA8OVDeqHTqVxFQQk5rrHCDPOSHLCXAqqArXb5yl90Vk1wU7BnPe6iwScCcPbWd rkw8twZYLNp7sCDTZ5es77Zzs431R1sc8pL/SOwbv9o30cQfbW9FZAhboyI3o/ug 
RdKYlB72y8wN8ijh/UENo3W89MzHtbZ1XYMCauYn9zDUGci4Bnziqfpd/dV+CUeC Fs/DP5f2OkiinHRmf060xj7HN7Q3SWziFbMRVO85/e7jjUcNQyBqikHXBl3V2hpM hRPsObhPAoLVxz8fBVMYfxR1E7wTpv5KWzvWSPh4QUX+gRpCYL/h/WJ6qUqjeXMP 1u6vM7uX9+OjNkEAql9L9cPmm1GIam8yBoRsP/Om0VFKDZUvhTo1QC1Q3finiSm4 89s7tlobx0KafcD+yNKpSFtq/XUIv3Q= Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/ExtensionsConfig-incarnation-3.xml000066400000000000000000000051231510742556200336720ustar00rootroot00000000000000 false false Prod https://umsafxxnhhpbmc0hwbmp.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml https://umsaqhczvcnhlz0kplll.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml 2.9.1.1 westeurope CRP https://z16.blob.storage.azure.net/$system/15b576b88462.status { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "*** REDACTED ***" } } ] } https://z16.blob.storage.azure.net/$system/15b576b88462.vmSettings Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/ExtensionsConfig-incarnation-4.xml000066400000000000000000000051241510742556200336740ustar00rootroot00000000000000 false false Prod https://umsafxxnhhpbmc0hwbmp.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml https://umsaqhczvcnhlz0kplll.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml 2.11.1.4 westeurope CRP https://z16.blob.storage.azure.net/$system/15b576b88462.status { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "C0EDFF1B408001B0FD14F8F615E567F7833822D0", "protectedSettings": "*** REDACTED ***" } } ] } https://z16.blob.storage.azure.net/$system/15b576b88462.vmSettings 
Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/GoalState-incarnation-3.xml000066400000000000000000000035371510742556200322770ustar00rootroot00000000000000 2012-11-30 3 Started 300000 FALSE 7db6fcb6-9911-4da4-8687 00abac54-f426-4653-bcc6-0ed70cddda87._kvm03 Started http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=hostingEnvironmentConfig&incarnation=3 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=sharedConfig&incarnation=3 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=extensionsConfig&incarnation=3 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=fullConfig&incarnation=3 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=certificates&incarnation=3 00abac54-f426-4653-bcc6-0ed70cddda87.2.00abac54-f426-4653-bcc6-0ed70cddda87.3._1.xml Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/GoalState-incarnation-4.xml000066400000000000000000000036371510742556200323010ustar00rootroot00000000000000 2012-11-30 4 Started 300000 16001 FALSE 7db6fcb6-9911-4da4-8687 00abac54-f426-4653-bcc6-0ed70cddda87._kvm03 Started http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=hostingEnvironmentConfig&incarnation=4 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=sharedConfig&incarnation=4 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=extensionsConfig&incarnation=4 
http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=config&type=fullConfig&incarnation=4 http://168.63.129.16:80/machine/7db6fcb6-9911-4da4-8687/00abac64%2Df426%2D4653%2Dbaa6%2D0ed70cdcca87.%5Fkvm03?comp=certificates&incarnation=4 00abac54-f426-4653-bcc6-0ed70cddda87.3.00abac54-f426-4653-bcc6-0ed70cddda87.4._1.xml Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/TransportCert.pem000066400000000000000000000021471510742556200305400ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDEzCCAfugAwIBAgIUToMqRt0z6FfqfiJhS1Hh+u2j3VEwDQYJKoZIhvcNAQEL BQAwGTEXMBUGA1UEAwwOTGludXhUcmFuc3BvcnQwHhcNMjQwODAxMTYwOTU2WhcN MjYwODAxMTYwOTU2WjAZMRcwFQYDVQQDDA5MaW51eFRyYW5zcG9ydDCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAMs8jttzIHATj1BNs3r4cCOAMuVaM1b7 Aw8D7Lz3rTxFieQCh1vLSFl1l9SQmO7rmh0OfEzIKK8jAU4wkLclgospKuYpB9ME 5QnXbLpXWYfW99V4safGvv9lGZztGKMd4ZT2it9QcpKEFFi6W7cjIyiUuyYMB0uI IvA6s6tGs8LgN89Lx7HSTSR86QNPvRtTw0jlrr8nfM7EkaT9Q6xu6GjCp89wCx+h IwcPtstSgfMo5P+3IO30L1wSM+CF1n+nD9M8E4wtcxhoWLuyAPhDsw5f7jKyHmRo Nm9RxToM0ON67SmN2906i0NxzXWtuttww6KE/O6BEZKNlnp9ja3bnM8CAwEAAaNT MFEwHQYDVR0OBBYEFNPDyPggVKjneDW7XuzC8NqgmJ9VMB8GA1UdIwQYMBaAFNPD yPggVKjneDW7XuzC8NqgmJ9VMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL BQADggEBAFuVgcimwPxgpwKNvyUKMY9VFa6UVZs/ky6FEEaxrKVAl2GZF9MoSTO5 vXMdWYHtSF+RWYxCz5pt7Bv97zuEXvbino/JvsLrE8f265Woe2CdDOPiBCHWBOlH +wM71Hoh0TX7V2TSumona6e0cqUPT7fbNdaNZm8ZHoUscbbPmamERH9Z9zUXWPLk mtjwz17bvRriAMrglA/Dm3xHiEYBJv3+4FnOqPGfg9vZH6xfmrRwrF1Moj5jEZz5 cN2N+vO8HCEqGMBCpSlsWq1c2r3NwLH0J3b6EL7X4jcVvpykKg3WmOZGdataYDk9 0IHy8VyGiX7g3EJOAbbf12FjgLAt4NM= -----END CERTIFICATE----- Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation/TransportPrivate.pem000066400000000000000000000032501510742556200312510ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDLPI7bcyBwE49Q TbN6+HAjgDLlWjNW+wMPA+y89608RYnkAodby0hZdZfUkJju65odDnxMyCivIwFO 
MJC3JYKLKSrmKQfTBOUJ12y6V1mH1vfVeLGnxr7/ZRmc7RijHeGU9orfUHKShBRY ulu3IyMolLsmDAdLiCLwOrOrRrPC4DfPS8ex0k0kfOkDT70bU8NI5a6/J3zOxJGk /UOsbuhowqfPcAsfoSMHD7bLUoHzKOT/tyDt9C9cEjPghdZ/pw/TPBOMLXMYaFi7 sgD4Q7MOX+4ysh5kaDZvUcU6DNDjeu0pjdvdOotDcc11rbrbcMOihPzugRGSjZZ6 fY2t25zPAgMBAAECggEAE9CAJxIW4AZwKwagUIVnPXbSv3ynU7weRLj/vD6zg5RO CM5cTw1HLP2jg2RjnKuYt2uBn+TF3qldh7eBbHG6RAIL/iuS6TZpdCeuII7CmlVR jVz6iR594Z2EPUH6bHDN3P2adYI84V8CMtJcfcLtuxehFWkHzwvjSCOY/8JhZUbV ebXXc3zPdSu+WmeManXnzs4VgE6QnSNdyk67fvE1Qxi18s49XXWBPTg01hn+v2yJ QVuv36UP2MgIRZJE/PI9NL6tqqiHmY5sCIJ41hQLRxd/mnRC8hdHrfNNhqHVlC9g JoQQwn/dD12EZwyiQyJyGZOmFDrfv7G3d2QQVJ4OLQKBgQDrxf3nRK28CWaV2evS J4MZjTWmZGiNzMiqEtfTgd0v3+rs73WYaNfQ79Iejj6KJfJq7vtdawqGW1bPNfgF KJCdr3yxjpv5GsHF7fiE8ZWcQ6d6FTWNuayLOEbHnPemYTqg5pd1wsPgIBoE9Zqm zo1iuGxmwHos2yQgif9vEU99wwKBgQDcq/+aDscOO1oimJjAbBl95I8bOtSxR0Ip pv/iaB8+rrS18jiAygXuo34tq+L0HmoniMCuuVg4zhgAxzgnohTlsJpyGnzkdkmo TTan76WkFAedmurzQSu96p5F9HOc0MgluQHtPhO5SsjWhUgXxAU0Zoe+JnTVq0X+ //8z1s64BQKBgEbanl4U7p0WuiSIc+0ZALX6EMhrXlxW0WsC9KdUXJNZmHER2WYv A8R/fca++p5rnvlxzkqZs3UDGAh3cIykTymEJlX5xHfNCbSgulHBhDOMxVTT8N8h kG/aPrMYQfhXOdZG1feGy3ScURVydcJxSl4DjFgouc6nIKlCr2fCbQAfAoGAVpez 3EtSNzZ5HzxMLK3+rtUihufmEI7K2rdqj/iV0i4SQZeELp2YCFXlrJxXmb3ZoBvc qHOYt+m/p4aFdZ/3nU5YvM/CFJCKRN3PxcSXdjRZ7LGe4se/F25an07Wk0GmWI8p v2Ptr3c2Kl/ws0q7VB2rxKUokbP86pygE0KGqdUCgYAf8G1QLDZMq57XsNBpiITY xmS/vnmu2jj/DaTAiJ/gPkUaemoJ4xqhuIko7KqaNOBYoOMrOadldygNtrH1c5YE LKdPYQ9/bASF59DnBotKAv79n2svHFHNXkpZA+kIoH7QwhgKpwo3vNwcJcKRIBB9 MjMnBzho1vIbdhoIHJ+Egw== -----END PRIVATE KEY----- VmSettings-etag-10016425637754081485.json000066400000000000000000000075751510742556200331640ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation{ "hostGAPluginVersion": "1.0.8.151", "activityId": "ec144e92-ebf0-4512-8e1c-288ad24794c7", "correlationId": "165ed993-7252-4716-b959-4769254defb9", "inSvdSeqNo": 6, "extensionsLastModifiedTickCount": 638479963767702408, "extensionGoalStatesSource": "FastTrack", 
"statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://z16.blob.storage.azure.net/$system/15b576b88462.status" }, "gaFamilies": [ { "name": "Prod", "version": "2.10.0.8", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://umsal4z5pqr2dx3g5zdd.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsac44fqpt2lbfsgksz.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.30.2", "location": "https://umsaqhczvcnhlz0kplll.blob.core.windows.net/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8_manifest.xml", "failoverLocation": "https://umsagxbfcnkp2gcgbktm.blob.core.windows.net/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8_manifest.xml", "additionalLocations": [ "https://umsasvdz0hgqp25wbldc.blob.core.windows.net/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.18", "location": "https://umsambj0h5gkjvp2jtlj.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverLocation": "https://umsam0fgqtswdxrxl3k3.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsafrz2ts5fqvdzx4fh.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "uninstall", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": 
false }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.0.7", "location": "https://umsavdddshpjtvhw3xdl.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverLocation": "https://umsarc5lgv0zfkdmmhn0.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsav50t2vsvqhbjfw52.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9", "protectedSettings": "*** REDACTED ***" } ] } ] }VmSettings-etag-5410594052969431317.json000066400000000000000000000057671510742556200331060ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/tenant_certificate_rotation{ "hostGAPluginVersion": "1.0.8.151", "activityId": "5a0aae68-c723-4067-a696-db8dd6109b15", "correlationId": "1905a0ce-4647-4fe1-b4c4-35680ac4ef87", "inSvdSeqNo": 6, "extensionsLastModifiedTickCount": 638581165459535048, "extensionGoalStatesSource": "FastTrack", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://z16.blob.storage.azure.net/$system/15b576b88462.status" }, "gaFamilies": [ { "name": "Prod", "version": "2.11.1.4", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://umsaqhczvcnhlz0kplll.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml", "https://umsannq1fthghhbj40ml.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.30.3", "location": 
"https://umsas2lcxxb22k1050cv.blob.core.windows.net/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8_manifest.xml", "failoverLocation": "https://umsarnnfvjl3bbcrdndk.blob.core.windows.net/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8_manifest.xml", "additionalLocations": [ "https://umsafxxnhhpbmc0hwbmp.blob.core.windows.net/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8/bab0cbee-153b-ca2c-b8ec-eea35f48a1e8_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.0.7", "location": "https://umsajhv5phdtmpbzfm5q.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverLocation": "https://umsajgk1fhwpwm4rh2kc.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsavdddshpjtvhw3xdl.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "C0EDFF1B408001B0FD14F8F615E567F7833822D0", "protectedSettings": "*** REDACTED ***" } ] } ] }Azure-WALinuxAgent-a976115/tests/data/test_waagent.conf000066400000000000000000000072421510742556200230060ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Key / value handling test entries =Value0 FauxKey1= Value1 FauxKey2=Value2 Value2 FauxKey3=delalloc,rw,noatime,nobarrier,users,mode=777 # Enable extension handling Extensions.Enabled=y # Specify provisioning agent. Provisioning.Agent=auto # Password authentication for root account will be unavailable. 
Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # An EOL comment that should be ignored # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n#Another EOL comment that should be ignored # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Use encrypted swap ResourceDisk.EnableSwapEncryption=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-seperated list of mount options. See man(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=y#Another EOL comment that should be ignored # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the SSH ClientAliveInterval OS.SshClientAliveInterval=42#Yet another EOL comment with a '#' that should be ignored # Set the path to SSH keys and configuration files OS.SshDir=/notareal/path # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # OS.HomeDir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=n # OS.UpdateRdmaDriver=n # OS.CheckRdmaDriver=n # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility # AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=n Debug.EnableExtensionPolicy=n Debug.EnableSignatureValidation=n Azure-WALinuxAgent-a976115/tests/data/wire/000077500000000000000000000000001510742556200204135ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/certs.xml000066400000000000000000000123351510742556200222610ustar00rootroot00000000000000 2012-11-30 1 Pkcs7BlobWithPfxContents MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAU08PI+CBUqOd4 Nbte7MLw2qCYn1UwDQYJKoZIhvcNAQEBBQAEggEASTTfHNyY+9hdXd+Eqtqk+yPb RA7rRXWR8tQAJsdy3zAlu8WHymq945fnsf0bAW4mODIPYhhevmdo5VaI54AzAWhk EfJvtRQlZZEMGZVKgUSwP4AG6cFaSnJuAYbi27nffM45PgD26O2WjOhnmM7minEC 31/wUoxjxVOxIc8x+Ngo+TquyBeaK1iXcchwIUnbM0xRYMfccOAEhe/iytKFPzdg DJbDk+KbVGaUuUfhF+o4mMyJNezMUFxWkePcUgP12li57GTJSIyi8OQaFUu1qh0L KzQ2sYl8U0WmWQBhXqvuug47WI/6XrRDpKslIV1aV4XxD1Or6H3nf0fULjQZajCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQI+4Ch/cEogOSAgg0QvlelG9yDK2GE XX1wn8Xw0wCt+zIceXs8C6QuRSmZLEkZVv8Y+duMwi2A0tcg63HOmY2AfIPvTTt8 eto3YwIklrfF20jBvCg/pT3kfm6TICWmMNd5XesTq8UNmkqzJQQ84L3Kbs/ix2pG 9RaeXkrg0VO7FBDVH8b+jIT9IVDAEXgBQVefcCImVZ9L2hQWNABFrFXAQSTKjfFJ IEOfXUhTiH434V1RKJczhFiH5SNZ0kbaRjmaQkXqbXQ5kKoq8VNkmFc6vPCclTmq QJFfIUTepljWW/HuVkUycNYQQkblmWNF9FEwSx++x3Tz1FLR3UlzOkJCqr+tS3jv WFnI16VlOHaaHA++YKhW1PUujJcEdZaXBE0FC6JZF7IOAOjSdLSmRL9yU95erfgZ hRo2FB8EWVZitIG+DPU9vU59chGpqXYzZU4/aTpedGeWSZ9GFXRqwb6htmajjTWu l5fIME3hWt7kcejpuXCTDcdG4YcbngZu4hcepMrUhm9g2BdmIDb1YiB7290PMop8 4nNo97tSBvhzk300cg6+pfxy1iAv3++g/ggOI+Y/gFmgN88mmBMWm0+mocJ0SZGY 3+8K/8pDpJpfAAXSjayl7T2UXUdJe8fpOtetiHUr2zIbZXlM4IQw+0UMAVjTiaRT BIDGoPEcpCcxqPlSTTEie166uzzPXG9skVgennjN6YopwMC/WPaFRJu/eTlQOqlB EqvK9TKJG8u2yp00J04MGYXluY4l/o3/KLpT0mCOeOJm3KerfwQ/jU2oHHmvIATN XYy32ULqx/CjL+N3ax0Nu+UrgMQPcVhrTN/7lnZpFLYwXetGzH/4jdNfIfTc4yGn 0GlVT6cVgJyV8wyYpbqCxHtCW83II8vXLjTfeIffHBoJU0fMMPWEIxRuMQSksm0H 
F1u/rfGVSXnueshbJUD3pnvTiLPuWcOexSxP+B8BCNfi21jX5Ha+U9RKrKbHc4h9 PkiWxU6ZEqCBkdP9ssKnmMKMsrC7sZRoYziHNeqlZp/GFQmkI+DeFlqSPn3Lv9Or HF3bZokZCf0RGEkZDPrigaiEoL7PH/TtVZF8miL4JCLB0FVB08vWeeP5zjQT4H6J jSC2pw+5bA2UWGshgsKKAJJihYcOuybtzglh7nqmSSZcszz3GyuDhdR8KDrYwChU Hn13+rSWAbbqtxSyPc5fd22Q4Do2aD6PVdRadHjG0qeE7Dq46YHT3Z9KF0nQTLk8 uYq8hL5+jQEgTnUB0yJTKdEcg05TyrMfNHWuM1pru0bqpf25vpwP5t+Sd/vgWJNc XtRLWrMdYBuSG9zOyLaH7bj0rcMhN3ULisKej9IT/xHOWSXXZjNoe1P3q9fvtMbg ZXAale/xJ6rXq6mLvZXivJfQJkPbSV7fByPPKO6TMnHbNEgLOGO3XtHEwC24JKup C0ohq03QqQHEisS9Mk5LvWmSchXR3/7vCtJFyOemQom7nCy8cx4Y1JGmZ4SGSaEs QZs7GC7Ftb/X82LRuknvS19ApOVFEs4/8t+LviD3x7Z9quVv+fZvydhzNKGRR6kQ fYZwK7rqqkvuFKgXqNbzlrtlUqOUPXJgdO7QHOtU8z+k2NzBWfOp6j+Ef8rc3GDU HSVZZ/Lz0RWedxRC1zoZJSol7ckMxIGIpDhtb9xgDmaGKILWOR9k+wG6+7ywQ2LE PB3myDOclvKUDyb/DqwRS9ch9yyYSmz8WXTgdSeyOjp8QT2JQuuOOhoooHuKSxAk +7v/Fh5bNGtjHByuzMYSdLcWsLX+UohpDoc1heVgUA3R6EuIOJTA0nC653YmqIBp R5rsT+esub/EndweZTacmc2nDJxTKdZgMvdwhnsOZZBGsOaD7MXAS5vCsze+PQmY 4+VqqWPASaclV6CygN4qSxmww6mVgmAgWVmJqfa6vOyb3zhx68TkNEp9rxJFcJSJ NiTTvWe0nF+o2/a1HZ8rZFdf65KsqGSiqu/6HoUuFzWLxRCqSjB9RkfSqrDVAVim pwL46zGRsqZV+5xrRQlxINNUbg/D11zcp1zdhQvhDrpBoLMjK7AaxA5msPYFy6Gm KMRAG2kyi802W5CPZWkbiEoUA8vkiICuxN+Pdh146zk9Ngl4PC3YpNCMtXK11ifd hYxmWqEuQ2AcdVTckosaWrFMn5MqEcR0aAXZbnjIMgTZ6SMYJBZMWjzJhe/UQjTo vICK7KAH82chpW2hG2I67z7e1Nv930RyL6JbYI8mSqgccPBzOBUhpHvKDM59z8Nc eStEYDdOcMz8P+c/H3Bh4WsyMWMOwWvjyy6GX5Bpl5z94tWFRn6W4FK5iDqp+HHm v5W1+hlFBxXtuzBcSQntcj8LoExJ2mK6BhZkaeAESMqPvNeNFmhEVUGq0/+c7T4I L+1YkQPcm/nIpwW/ITmkGmi5n5VsvbJFDbQe+h9LI2aqvWtzA0YT5Ed77Glbdbgq qB8EyXdr1BsBb7s7bbXm4Wf8UJkCZESg8iQExkUk8HqMJRxjctjma0DyyKVi4j8Q +BA1EYBEX37641S+ZR9fYmQeuULGkf3d+w/ttgvm6YDZivsZYWkTscX+lUtoHhWN 5EOAfllI0/DaGX15mGONMV8YA1PoCNEX3yKJ5tVGkxxUPK+Op7ZHvJmtb1fPMRRY z+evQ+NTXTZZzdr3Kfs4yYbuXG4e1odm2v/zBKG7JF3yWPMtXZZiMks/BkaXTq1P LrB0VxGcMsLeQ5HbbWJtchyCWyy63CNNbfYNohjxru52DjaAQlDKQT9pOiSmGJzb 7+hNnKYnOfo6Du2ljz7C9C4mxnRJsRA2O9Cw66J5XPy1W+2+RmvP72jXwoFWYzPq 
jxNs2wxOYQjEDpXBTmCbW58F5cTbSTk3D15iCtYtf31tpuPpHEnz+2OvrX0WhygN esZJnln2Tu2ut1pVhAuJDLZTj24Y4MP0nmDINuLDAkFji0CwjACvW7M9SbIOLLYU +5JHHjB7wqaTXWFzpt/ZKXMXlwCzWjo3pDERbrpYbwS3GHqmtcyIZK4EA7Ulka5Y 7rLPWS5eKcjX3tp2FyX5pD52TpuUMPAk6vyefX+NznP7opvJpusHbkschojFVRDA zHIpIGeWjYcWLk5YTPagzH8o+4ci1OEk+OMc8i6PxkQDeBw1RiCAFfBnKPCSEtFk KJlw7fspk3/chA6mmvOHjkrQmUhUuDxAVGCVxl0K5LU3Y2IQxKGtCJk5YO4XD2e7 5b0Ub+wy4Bb0l+z8HjuqEypFXDpQTd80NbhStZBgf2cB01elsqmKD9sT9wpFGKbC VaatDLsLx4XrBG6ueoFKBgFL6l7afEPct8wuSoUrX5MAGlge5xzQYAD5spLlEa9G Dt2KiPCsZcqWiaHiw5vk849FXUcfFfGl+0rEKhzcfUn3zkL1mGfqZ8Nf7qjMXdMy dbUUQYMZXtMtK3fnYBnavgaUcu0bZ7Av+GVTQvDxfpzSeMW8lK7Ko6mINFQVC8dx TEKWX+eApFUnTb11vNNxwxdOB2l5N+kfNLnVMhuYd7l8IHQxMMQTcf8hYu0owry6 JkIdkhnF1kXVC2YWxo4VrDPwzkBWZE28ygBNhWgKCRhZnnbDEWPuqGP/IaLN4vww 1lqkZltqZDddXvOTXN/tZmkkQHt2uP264vqJB2BkGzxOll5UDQ8V3gXwheuUGxYc gVL4ZJSKfHnUp6oRafIBnQs5RBvqdj2wewzT8AyPWImRG6fkYvsub8qIFqG6mu4Y ixAQ9oTgg/KOXYNsfYuLGswu/aNnAqMEjfMerSx7dDu7teETkWb+IQJtodOdE/LI yO/puds1M+V2H0TD36zXRyvEnpfm5BTURkxM8dI6meR37/JGtObtjg+Gzjpu6HGm sIYyhG8bvV0Vkuip4bEgBB6T39dt/DeElHABthUmzFZe/QC8j7IJjyCz40JWDJSo 8wPtOoLnLeX0ynD8x8A5NsQk3W9fgEtv0WG6Uahs7P8GEZ5Uh9GPvWQpAkjKv7OZ XVHJdTBMJICbB1Bzr8Nl0qPfQrhFzTNBMjBEwyaBpzRiV1hdTB2YPJPbjQQtQGkO vT/EsAEWwSqDrQrDCfGRl7mhjdAsVFMjERdJE3/2TctY8VnLaRzUTSGkpCKxl+V4 CLrBi96N80pxer5eKYtt5gtLFw0gZeeeqb2VDj6ChVnUjJ9r0TXzyy8ztwpB8X5Y mZUDASD1acdZZOiEp69WA6juQR0EGKQT5phh+k0HbziW+bXMM+7YwiRJzwX4obnd wgF+wyHht3Rzaptv5JSZMkc1RGSFIdWUwEp+3Ik6DGywiTcVkU65TQ7CsQJjmmkL AChG7tUBI4KmolT9D0rj3A90//wl3ACkCFq94m0BZOFiimUXFjqux135P5i37XRJ /8wgWZ0nzmXdFyTkEJEessAMbCkMiDHwaT7Lbs+S0qFeobh4DD3tkONnqSNa7md4 945Z9MJiapzD3P33TvKhyQ0wHe5W0z4= Azure-WALinuxAgent-a976115/tests/data/wire/certs_format_not_pfx.xml000066400000000000000000000004741510742556200253670ustar00rootroot00000000000000 2012-11-30 12 CertificatesNonPfxPackage NotPFXData 
Azure-WALinuxAgent-a976115/tests/data/wire/certs_no_format_specified.xml000066400000000000000000000123051510742556200263350ustar00rootroot00000000000000 2012-11-30 1 MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAU08PI+CBUqOd4 Nbte7MLw2qCYn1UwDQYJKoZIhvcNAQEBBQAEggEASTTfHNyY+9hdXd+Eqtqk+yPb RA7rRXWR8tQAJsdy3zAlu8WHymq945fnsf0bAW4mODIPYhhevmdo5VaI54AzAWhk EfJvtRQlZZEMGZVKgUSwP4AG6cFaSnJuAYbi27nffM45PgD26O2WjOhnmM7minEC 31/wUoxjxVOxIc8x+Ngo+TquyBeaK1iXcchwIUnbM0xRYMfccOAEhe/iytKFPzdg DJbDk+KbVGaUuUfhF+o4mMyJNezMUFxWkePcUgP12li57GTJSIyi8OQaFUu1qh0L KzQ2sYl8U0WmWQBhXqvuug47WI/6XrRDpKslIV1aV4XxD1Or6H3nf0fULjQZajCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQI+4Ch/cEogOSAgg0QvlelG9yDK2GE XX1wn8Xw0wCt+zIceXs8C6QuRSmZLEkZVv8Y+duMwi2A0tcg63HOmY2AfIPvTTt8 eto3YwIklrfF20jBvCg/pT3kfm6TICWmMNd5XesTq8UNmkqzJQQ84L3Kbs/ix2pG 9RaeXkrg0VO7FBDVH8b+jIT9IVDAEXgBQVefcCImVZ9L2hQWNABFrFXAQSTKjfFJ IEOfXUhTiH434V1RKJczhFiH5SNZ0kbaRjmaQkXqbXQ5kKoq8VNkmFc6vPCclTmq QJFfIUTepljWW/HuVkUycNYQQkblmWNF9FEwSx++x3Tz1FLR3UlzOkJCqr+tS3jv WFnI16VlOHaaHA++YKhW1PUujJcEdZaXBE0FC6JZF7IOAOjSdLSmRL9yU95erfgZ hRo2FB8EWVZitIG+DPU9vU59chGpqXYzZU4/aTpedGeWSZ9GFXRqwb6htmajjTWu l5fIME3hWt7kcejpuXCTDcdG4YcbngZu4hcepMrUhm9g2BdmIDb1YiB7290PMop8 4nNo97tSBvhzk300cg6+pfxy1iAv3++g/ggOI+Y/gFmgN88mmBMWm0+mocJ0SZGY 3+8K/8pDpJpfAAXSjayl7T2UXUdJe8fpOtetiHUr2zIbZXlM4IQw+0UMAVjTiaRT BIDGoPEcpCcxqPlSTTEie166uzzPXG9skVgennjN6YopwMC/WPaFRJu/eTlQOqlB EqvK9TKJG8u2yp00J04MGYXluY4l/o3/KLpT0mCOeOJm3KerfwQ/jU2oHHmvIATN XYy32ULqx/CjL+N3ax0Nu+UrgMQPcVhrTN/7lnZpFLYwXetGzH/4jdNfIfTc4yGn 0GlVT6cVgJyV8wyYpbqCxHtCW83II8vXLjTfeIffHBoJU0fMMPWEIxRuMQSksm0H F1u/rfGVSXnueshbJUD3pnvTiLPuWcOexSxP+B8BCNfi21jX5Ha+U9RKrKbHc4h9 PkiWxU6ZEqCBkdP9ssKnmMKMsrC7sZRoYziHNeqlZp/GFQmkI+DeFlqSPn3Lv9Or HF3bZokZCf0RGEkZDPrigaiEoL7PH/TtVZF8miL4JCLB0FVB08vWeeP5zjQT4H6J jSC2pw+5bA2UWGshgsKKAJJihYcOuybtzglh7nqmSSZcszz3GyuDhdR8KDrYwChU Hn13+rSWAbbqtxSyPc5fd22Q4Do2aD6PVdRadHjG0qeE7Dq46YHT3Z9KF0nQTLk8 uYq8hL5+jQEgTnUB0yJTKdEcg05TyrMfNHWuM1pru0bqpf25vpwP5t+Sd/vgWJNc 
XtRLWrMdYBuSG9zOyLaH7bj0rcMhN3ULisKej9IT/xHOWSXXZjNoe1P3q9fvtMbg ZXAale/xJ6rXq6mLvZXivJfQJkPbSV7fByPPKO6TMnHbNEgLOGO3XtHEwC24JKup C0ohq03QqQHEisS9Mk5LvWmSchXR3/7vCtJFyOemQom7nCy8cx4Y1JGmZ4SGSaEs QZs7GC7Ftb/X82LRuknvS19ApOVFEs4/8t+LviD3x7Z9quVv+fZvydhzNKGRR6kQ fYZwK7rqqkvuFKgXqNbzlrtlUqOUPXJgdO7QHOtU8z+k2NzBWfOp6j+Ef8rc3GDU HSVZZ/Lz0RWedxRC1zoZJSol7ckMxIGIpDhtb9xgDmaGKILWOR9k+wG6+7ywQ2LE PB3myDOclvKUDyb/DqwRS9ch9yyYSmz8WXTgdSeyOjp8QT2JQuuOOhoooHuKSxAk +7v/Fh5bNGtjHByuzMYSdLcWsLX+UohpDoc1heVgUA3R6EuIOJTA0nC653YmqIBp R5rsT+esub/EndweZTacmc2nDJxTKdZgMvdwhnsOZZBGsOaD7MXAS5vCsze+PQmY 4+VqqWPASaclV6CygN4qSxmww6mVgmAgWVmJqfa6vOyb3zhx68TkNEp9rxJFcJSJ NiTTvWe0nF+o2/a1HZ8rZFdf65KsqGSiqu/6HoUuFzWLxRCqSjB9RkfSqrDVAVim pwL46zGRsqZV+5xrRQlxINNUbg/D11zcp1zdhQvhDrpBoLMjK7AaxA5msPYFy6Gm KMRAG2kyi802W5CPZWkbiEoUA8vkiICuxN+Pdh146zk9Ngl4PC3YpNCMtXK11ifd hYxmWqEuQ2AcdVTckosaWrFMn5MqEcR0aAXZbnjIMgTZ6SMYJBZMWjzJhe/UQjTo vICK7KAH82chpW2hG2I67z7e1Nv930RyL6JbYI8mSqgccPBzOBUhpHvKDM59z8Nc eStEYDdOcMz8P+c/H3Bh4WsyMWMOwWvjyy6GX5Bpl5z94tWFRn6W4FK5iDqp+HHm v5W1+hlFBxXtuzBcSQntcj8LoExJ2mK6BhZkaeAESMqPvNeNFmhEVUGq0/+c7T4I L+1YkQPcm/nIpwW/ITmkGmi5n5VsvbJFDbQe+h9LI2aqvWtzA0YT5Ed77Glbdbgq qB8EyXdr1BsBb7s7bbXm4Wf8UJkCZESg8iQExkUk8HqMJRxjctjma0DyyKVi4j8Q +BA1EYBEX37641S+ZR9fYmQeuULGkf3d+w/ttgvm6YDZivsZYWkTscX+lUtoHhWN 5EOAfllI0/DaGX15mGONMV8YA1PoCNEX3yKJ5tVGkxxUPK+Op7ZHvJmtb1fPMRRY z+evQ+NTXTZZzdr3Kfs4yYbuXG4e1odm2v/zBKG7JF3yWPMtXZZiMks/BkaXTq1P LrB0VxGcMsLeQ5HbbWJtchyCWyy63CNNbfYNohjxru52DjaAQlDKQT9pOiSmGJzb 7+hNnKYnOfo6Du2ljz7C9C4mxnRJsRA2O9Cw66J5XPy1W+2+RmvP72jXwoFWYzPq jxNs2wxOYQjEDpXBTmCbW58F5cTbSTk3D15iCtYtf31tpuPpHEnz+2OvrX0WhygN esZJnln2Tu2ut1pVhAuJDLZTj24Y4MP0nmDINuLDAkFji0CwjACvW7M9SbIOLLYU +5JHHjB7wqaTXWFzpt/ZKXMXlwCzWjo3pDERbrpYbwS3GHqmtcyIZK4EA7Ulka5Y 7rLPWS5eKcjX3tp2FyX5pD52TpuUMPAk6vyefX+NznP7opvJpusHbkschojFVRDA zHIpIGeWjYcWLk5YTPagzH8o+4ci1OEk+OMc8i6PxkQDeBw1RiCAFfBnKPCSEtFk KJlw7fspk3/chA6mmvOHjkrQmUhUuDxAVGCVxl0K5LU3Y2IQxKGtCJk5YO4XD2e7 
5b0Ub+wy4Bb0l+z8HjuqEypFXDpQTd80NbhStZBgf2cB01elsqmKD9sT9wpFGKbC VaatDLsLx4XrBG6ueoFKBgFL6l7afEPct8wuSoUrX5MAGlge5xzQYAD5spLlEa9G Dt2KiPCsZcqWiaHiw5vk849FXUcfFfGl+0rEKhzcfUn3zkL1mGfqZ8Nf7qjMXdMy dbUUQYMZXtMtK3fnYBnavgaUcu0bZ7Av+GVTQvDxfpzSeMW8lK7Ko6mINFQVC8dx TEKWX+eApFUnTb11vNNxwxdOB2l5N+kfNLnVMhuYd7l8IHQxMMQTcf8hYu0owry6 JkIdkhnF1kXVC2YWxo4VrDPwzkBWZE28ygBNhWgKCRhZnnbDEWPuqGP/IaLN4vww 1lqkZltqZDddXvOTXN/tZmkkQHt2uP264vqJB2BkGzxOll5UDQ8V3gXwheuUGxYc gVL4ZJSKfHnUp6oRafIBnQs5RBvqdj2wewzT8AyPWImRG6fkYvsub8qIFqG6mu4Y ixAQ9oTgg/KOXYNsfYuLGswu/aNnAqMEjfMerSx7dDu7teETkWb+IQJtodOdE/LI yO/puds1M+V2H0TD36zXRyvEnpfm5BTURkxM8dI6meR37/JGtObtjg+Gzjpu6HGm sIYyhG8bvV0Vkuip4bEgBB6T39dt/DeElHABthUmzFZe/QC8j7IJjyCz40JWDJSo 8wPtOoLnLeX0ynD8x8A5NsQk3W9fgEtv0WG6Uahs7P8GEZ5Uh9GPvWQpAkjKv7OZ XVHJdTBMJICbB1Bzr8Nl0qPfQrhFzTNBMjBEwyaBpzRiV1hdTB2YPJPbjQQtQGkO vT/EsAEWwSqDrQrDCfGRl7mhjdAsVFMjERdJE3/2TctY8VnLaRzUTSGkpCKxl+V4 CLrBi96N80pxer5eKYtt5gtLFw0gZeeeqb2VDj6ChVnUjJ9r0TXzyy8ztwpB8X5Y mZUDASD1acdZZOiEp69WA6juQR0EGKQT5phh+k0HbziW+bXMM+7YwiRJzwX4obnd wgF+wyHht3Rzaptv5JSZMkc1RGSFIdWUwEp+3Ik6DGywiTcVkU65TQ7CsQJjmmkL AChG7tUBI4KmolT9D0rj3A90//wl3ACkCFq94m0BZOFiimUXFjqux135P5i37XRJ /8wgWZ0nzmXdFyTkEJEessAMbCkMiDHwaT7Lbs+S0qFeobh4DD3tkONnqSNa7md4 945Z9MJiapzD3P33TvKhyQ0wHe5W0z4= Azure-WALinuxAgent-a976115/tests/data/wire/ec-key.pem000066400000000000000000000003431510742556200222730ustar00rootroot00000000000000-----BEGIN EC PRIVATE KEY----- MHcCAQEEIEydYXZkSbZjdKaNEurW6x2W3dEOC5+yDxM/Wkq1m6lUoAoGCCqGSM49 AwEHoUQDQgAE8H1M+73QdzCyIDToTyU7OTMfi9cnIt8B4sz7e127ydNBVWjDwgGV bKXPNtuQSWNgkfGW8A3tf9S8VcKNFxXaZg== -----END EC PRIVATE KEY----- Azure-WALinuxAgent-a976115/tests/data/wire/ec-key.pub.pem000066400000000000000000000002621510742556200230600ustar00rootroot00000000000000-----BEGIN PUBLIC KEY----- MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE8H1M+73QdzCyIDToTyU7OTMfi9cn It8B4sz7e127ydNBVWjDwgGVbKXPNtuQSWNgkfGW8A3tf9S8VcKNFxXaZg== -----END PUBLIC KEY----- 
Azure-WALinuxAgent-a976115/tests/data/wire/encrypted.enc000066400000000000000000000010551510742556200231000ustar00rootroot00000000000000MIIBlwYJKoZIhvcNAQcDoIIBiDCCAYQCAQIxggEwMIIBLAIBAoAUW4P+tNXlmDXW H30raKBkpUhXYwUwDQYJKoZIhvcNAQEBBQAEggEAP0LpwacLdJyvNQVmSyXPGM0i mNJSHPQsAXLFFcmWmCAGiEsQWiHKV9mON/eyd6DjtgbTuhVNHPY/IDSDXfjgLxdX NK1XejuEaVTwdVtCJWl5l4luOeCMDueitoIgBqgkbFpteqV6s8RFwnv+a2HhM0lc TUwim6skx1bFs0csDD5DkM7R10EWxWHjdKox8R8tq/C2xpaVWRvJ52/DCVgeHOfh orV0GmBK0ue/mZVTxu8jz2BxQUBhHXNWjBuNuGNmUuZvD0VY1q2K6Fa3xzv32mfB xPKgt6ru/wG1Kn6P8yMdKS3bQiNZxE1D1o3epDujiygQahUby5cI/WXk7ryZ1DBL BgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECAxpp+ZE6rpAgChqxBVpU047fb4zinTV 5xaG7lN15YEME4q8CqcF/Ji3NbHPmdw1/gtf Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf-no_encoded_signature.xml000066400000000000000000000034271510742556200271240ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf-no_gs_metadata.xml000066400000000000000000000031421510742556200257050ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo 
Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf-vm_access_with_invalid_signature.xml000066400000000000000000000360001510742556200315240ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf-vm_access_with_signature.xml000066400000000000000000000360241510742556200300240ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf.xml000066400000000000000000000355071510742556200227540ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_additional_locations.xml000066400000000000000000000035011510742556200272040ustar00rootroot00000000000000 Prod 
http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml http://mock-goal-state/additionalLocation/3/manifest.xml http://mock-goal-state/additionalLocation/4/manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_aks_extension.xml000066400000000000000000000106461510742556200257030ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling non-AKS"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling AKSNode"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling AKSBilling"} } } ] } https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_autoupgrade.xml000066400000000000000000000045071510742556200253500ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_autoupgrade_internalversion.xml000066400000000000000000000045071510742556200306520ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_dependencies_with_empty_settings.xml000066400000000000000000000052141510742556200316430ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_downgrade_rsm_version.xml000066400000000000000000000042201510742556200274200ustar00rootroot00000000000000 Prod 9.9.9.0 9.9.9.9 True True http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.0 9.9.9.9 True True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_in_vm_artifacts_profile.xml000066400000000000000000000041321510742556200277120ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo https://mock-goal-state/test.blob.core.windows.net/$system/test-cs12.test-cs12.test-cs12.vmSettings?sv=2016-05-31&sr=b&sk=system-1&sig=saskey;se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_in_vm_empty_artifacts_profile.xml000066400000000000000000000036371510742556200311410ustar00rootroot00000000000000 Prod 
http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_in_vm_metadata.xml000066400000000000000000000036551510742556200260030ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_internalversion.xml000066400000000000000000000045071510742556200262520ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_invalid_and_valid_handlers.xml000066400000000000000000000056061510742556200303400ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_invalid_vm_metadata.xml000066400000000000000000000035241510742556200270160ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo 
Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_missing_family.xml000066400000000000000000000015711510742556200260400ustar00rootroot00000000000000 Prod eastus https://walaautoasmeastus.blob.core.windows.net/vhds/walaautos73small.walaautos73small.walaautos73small.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=u%2BCA2Cxb7ticiEBRIW8HWgNW7gl2NPuOGQl0u95ApQE%3D Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_multiple_extensions.xml000066400000000000000000000216571510742556200271470ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIIB4AYJKoZIhvcNAQcDoIIB0TCCAc0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEANYey5W0qDqC6RHZlVnpLp2dWrMr1Rt5TCFkOjq1jU4y2y1FPtsTTKq9Z5pdGb/IHQo9VcT+OFglO3bChMbqc1vgmk4wkTQkgJVD3C8Rq4nv3uvQIux+g8zsa1MPKT5fTwG/dcrBp9xqySJLexUiuJljmNJgorGc0KtLwjnad4HTSKudDSo5DGskSDLxxLZYx0VVtQvgekOOwT/0C0pN4+JS/766jdUAnHR3oOuD5Dx7/c6EhFSoiYXMA0bUzH7VZeF8j/rkP1xscLQRrCScCNV2Ox424Y4RBbcbP/p69lDxGURcIKLKrIUhQdC8CfUMkQUEmFDLcOtxutCTFBZYMJzBbBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECCuc0a4Gl8PAgDgcHekee/CivSTCXntJiCrltUDob8cX4YtIS6lq3H08Ar+2tKkpg5e3bOkdAo3q2GfIrGDm4MtVWw==","publicSettings":{"foo":"bar"}}}]} 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIIBwAYJKoZIhvcNAQcDoIIBsTCCAa0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEABILhQPoMx3NEbd/sS0xAAE4rJXwzJSE0bWr4OaKpcGS4ePtaNW8XWm+psYR9CBlXuGCuDVlFEdPmO2Ai8NX8TvT7RVYYc6yVQKpNQqO6Q9g9O52XXX4tBSFSCfoTzd1kbGC1c2wbXDyeROGCjraWuGHd4C9s9gytpgAlYicZjOqV3deo30F4vXZ+ZhCNpMkOvSXcsNpzTzQ/mskwNubN8MPkg/jEAzTHRpiJl3tjGtTqm00GHMqFF8/31jnoLQeQnWSmY+FBpiTUhPzyjufIcoZ+ueGXZiJ77xyH2Rghh5wvQM8oTVy2dwFQGeqjHOVgdgRNi/HgfZhcdltaQ8kjYDA7BgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECHPM0ZKBn+aWgBiVPT7zlkJA8eGuH7bNMTQCtGoJezToa24=","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIIB4AYJKoZIhvcNAQcDoIIB0TCCAc0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEAGSKUDRN64DIB7FS7yKXa07OXaFPhmdNnNDOAOD3/WVFb9fQ2bztV46waq7iRO+lpz7LSerRzIe6Kod9zCfK7ryukRomVHIfTIBwPjQ+Otn8ZD2aVcrxR0EI95x/SGyiESJRQnOMbpoVSWSu2KJUCPfycQ4ODbaazDc61k0JCmmRy12rQ4ttyWKhYwpwI2OYFHGr39N/YYq6H8skHj5ve1605i4P9XpfEyIwF5BbX59tDOAFFQtX7jzQcz//LtaHHjwLmysmD9OG5XyvfbBICwSYJfMX9Jh1aahLwcjL8Bd0vYyGL1ItMQF5KfDwog4+HLcRGx+S02Yngm3/YKS9DmzBbBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECFGLNfK0bO5OgDgH90bRzqfgKK6EEh52XJfHz9G/ZL1mqP/ueWqo95PtEFo1gvI7z25V/pT0tBGibXgRhQXLFmwVTA==","publicSettings":{"foo":"bar"}}}]} 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIIEzAYJKoZIhvcNAQcDoIIEvTCCBLkCAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEAFqLDBFGeuglluYmZb0Zw+ZlMiMIws9/LgmurVSRUTU/nSleIc9vOLcukfMeCpMativzHe23iDFy6p3XDkViNcuzqbhlPq5LQsXXg+xaUrrg8Xy+q7KUQdxzPdNBdpgkUh6yE2EFbqVLQ/7x+TkkSsw35uPT0nEqSj3yYFGH7X/NJ49fKU+ZvFDp/N+o54UbE6ZdxlHFtz6NJFxx5w4z5adQ8DgnUyS0bJ2denolknODfSW2D2alm00SXlI88CAjeHgEDkoLCduwkrDkSFAODcAiEHHX8oYCnfanatpjm7ZgSutS9y7+XUnGWxDYoujHDI9bbV0WpyDcx/DIrlZ+WcTCCA0UGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIrL18Lbp1qU6AggMgGklvozqr8HqYP+DwkvxdwHSpo+23QFxh70os+NJRtVgBv5NjPEziXo3FpXHMPvt0kp0IwXbwyy5vwnjCTA2sQOYgj77X6RmwF6+1gt2DIHDN1Q6jWzdcXZVHykSiF3gshbebRKO0hydfCaCyYL36HOZ8ugyCctOon5EflrnoOYDDHRbsr30DAxZCAwGOGZEeoU2+U+YdhuMvplnMryD1f6b8FQ7jXihe/zczAibX5/22NxhsVgALdsV5h6hwuTbspDt3V15/VU8ak7a4xxdBfXOX0HcQI86oqsFr7S7zIveoQHsW+wzlyMjwi6DRPFpz2wFkv5ivgFEvtCzDQP4aCqGI8VdqzR7aUDnuqiSCe/cbmv5mSmTYlDPTR03WS0IvgyeoNAzqCbYQe44AUBEZb/yT8Z3XxwW0GzcPMZQ0XjpcZiaKAueN9V8nJgNCEDPTJqpSjy+tEHmSgxn70+E57F0vzPvdQ3vOEeRj8zlBblHd4uVrhxdBMUuQ73JEQEha5rz0qcUy04Wmjld1rBuX6pdOqrArAYzTLJbIuLqDjlnYFsHLs9QBGvIEb9VFOlAm5JW8npBbIRHXqPfwZWs60+uNksTtsN3MxBxUWJPOByb4xRNx+nRpTOvfKKFlgq1ReK5bGSTCB7x0Ft3+T42LOQDrBPyxxtGzWs+aq05qFgI4n0h8X82wxJflK+kUdwvvG/ZY5MM+/le2zOrUeyzvxXsHoRetgg+DOk7v+v7VsuT1KuvTXvgzxoOFF3/T2pNPpE3h6bbP2BUqZ2yzPNziGFslywDLZ8W3OUZoQejGqobRePdgUoBi5q2um/sPnq81kOJ/qhIOVq581ZD4IQWLot8eK8vX0G/y7y71YelRR51cUfgR5WvZZf6LvYw+GpwOtSViugl9QxGCviSLgHTJSSEm0ijtbzKhwP4vEyydNDrz8+WYB8DNIV7K2Pc8JyxAM03FYX30CaaJ40pbEUuVQVEnkAD2E//29/ZzgNTf/LBMzMEP5j7wlL+QQpmPAtL/FlBrOJ4nDEqsOOhWzI1MN51xRZuv3e2RqzVPiSmrKtk=","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"publicSettings":{"foo":"bar"}}}]} 
https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_no_extensions-block_blob.xml000066400000000000000000000010231510742556200277770ustar00rootroot00000000000000 http://foo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_no_extensions-no_status_blob.xml000066400000000000000000000007061510742556200307330ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_no_extensions-page_blob.xml000066400000000000000000000015221510742556200276250ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml http://sas_url Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_no_public.xml000066400000000000000000000126761510742556200250100ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK"}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_no_settings.xml000066400000000000000000000122021510742556200253530ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_redact.xml000066400000000000000000000130171510742556200242660ustar00rootroot00000000000000 false false Prod https://umsaq0ksgznjhnhvqvmj.blob.core.windows.net/568bb00f-455e-32b8-8deb-0e1bf1636254/568bb00f-455e-32b8-8deb-0e1bf1636254_manifest.xml 2.12.0.2 westus2 CRP https://md-z9z999zzzz99.z38.blob.storage.azure.net/$system/vm01.667ae9f0-zz99-99z9-9zz9-9z9z99999zz9.status?sv=2018-03-28&sr=b&sk=system-1&sig=9ZzZZzZZZZ999zzz%2bzz99zzZzZZZ99999zz%2b9ZZZ%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://umsalhsl5scrdmt03dql.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsajcsg1mkf30sqkz4v.blob.core.windows.net/60eb5f5d-256d-1d4b-3db7-8808756d9b09/60eb5f5d-256d-1d4b-3db7-8808756d9b09_manifest.xml https://umsarmtx2bjcqg0cj14s.blob.core.windows.net/4beefb67-5a1e-aa37-7ede-3180ba34ba0d/4beefb67-5a1e-aa37-7ede-3180ba34ba0d_manifest.xml { "runtimeSettings": [ { "handlerSettings": { 
"protectedSettingsCertThumbprint": "3CA9F246CB9F2D4D17A1BBF732CA5D70AE5CB810", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWMicrosoft.Azure.Extensions.CustomScript/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==" } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "3CA9F246CB9F2D4D17A1BBF732CA5D70AE5CB810", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWMicrosoft.Azure.AzureDefenderForServers.MDE.Linux/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==", "publicSettings": {"azureResourceId":"/subscriptions/z99z9999/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm01","forceReOnboarding":false,"vNextEnabled":false,"autoUpdate":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "3CA9F246CB9F2D4D17A1BBF732CA5D70AE5CB810", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWMicrosoft.CPlat.Core.RunCommandLinux/IsZAEZFidXVFB/8Ucc8OxGFWpqtXZdrFcuuSxOa6ib/+la5ukd7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==", "publicSettings": {} } } ] } https://md-z9z999zzzz99.z38.blob.storage.azure.net/$system/vm01.667ae9f0-zz99-99z9-9zz9-9z9z99999zz9.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=9ZzZZZ%2fZzz9zZzZZZZz9z9zZ99Zzz%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_required_features.xml000066400000000000000000000041241510742556200265410ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml TestRequiredFeature1 TestRequiredFeature2 TestRequiredFeature3 3.0 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_rsm_version.xml000066400000000000000000000041001510742556200253630ustar00rootroot00000000000000 Prod 9.9.9.10 True True http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.10 True True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_sequencing.xml000066400000000000000000000054451510742556200251730ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_settings_case_mismatch.xml000066400000000000000000000131461510742556200275470ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test 
http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_upgradeguid.xml000066400000000000000000000035141510742556200253250ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_version_missing_in_agent_family.xml000066400000000000000000000037741510742556200314600ustar00rootroot00000000000000 Prod True True http://mock-goal-state/manifest_of_ga.xml Test True True 
http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_version_missing_in_manifest.xml000066400000000000000000000046351510742556200306240ustar00rootroot00000000000000 Prod 9.9.9.999 True True http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.999 True True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_version_not_from_rsm.xml000066400000000000000000000041021510742556200272700ustar00rootroot00000000000000 Prod 9.9.9.10 False True http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.10 False True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ext_conf_vm_not_enabled_for_rsm_upgrades.xml000066400000000000000000000041041510742556200314160ustar00rootroot00000000000000 Prod 9.9.9.10 False False http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.10 False False 
http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/ga_manifest.xml000066400000000000000000000032301510742556200234100ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.0.0 1.1.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.1.0 1.2.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.2.0 2.0.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.0.0 2.1.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.1.0 2.5.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.5.0 9.9.9.10 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__9.9.9.10 99999.0.0.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__99999.0.0.0 Azure-WALinuxAgent-a976115/tests/data/wire/ga_manifest_no_upgrade.xml000066400000000000000000000014351510742556200256200ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.0.0 1.1.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.1.0 1.2.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.2.0 Azure-WALinuxAgent-a976115/tests/data/wire/ga_manifest_no_uris.xml000066400000000000000000000026211510742556200251510ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.0.0 1.1.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.1.0 1.2.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.2.0 
2.0.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.0.0 2.1.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.1.0 9.9.9.10 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__99999.0.0.0 99999.0.0.0 Azure-WALinuxAgent-a976115/tests/data/wire/goal_state.xml000066400000000000000000000035511510742556200232630ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=certificates&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-a976115/tests/data/wire/goal_state_no_certs.xml000066400000000000000000000032571510742556200251620ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 
http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-a976115/tests/data/wire/goal_state_no_ext.xml000066400000000000000000000032241510742556200246340ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=certificates&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-a976115/tests/data/wire/goal_state_noop.xml000066400000000000000000000007261510742556200243170ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 Azure-WALinuxAgent-a976115/tests/data/wire/goal_state_remote_access.xml000066400000000000000000000041141510742556200261530ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 http://168.63.129.16:80/remoteaccessinfouri/ b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started 
b61f93d0-e1ed-40b2-b067-22c243233448.1.b61f93d0-e1ed-40b2-b067-22c243233448.2.MachineRole_IN_0.xml http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=certificates&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-a976115/tests/data/wire/hosting_env.xml000066400000000000000000000043251510742556200234640ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/wire/in_vm_artifacts_profile.json000066400000000000000000000000221510742556200261700ustar00rootroot00000000000000{ "onHold": true }Azure-WALinuxAgent-a976115/tests/data/wire/incorrect-certs.xml000066400000000000000000000123351510742556200242470ustar00rootroot00000000000000 2012-11-30 5 Pkcs7BlobWithPfxContents MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAUiF8ZYMs9mMa8 QOEMxDaIhGza+0IwDQYJKoZIhvcNAQEBBQAEggEAQW7GyeRVEhHSU1/dzV0IndH0 rDQk+27MvlsWTcpNcgGFtfRYxu5bzmp0+DoimX3pRBlSFOpMJ34jpg4xs78EsSWH FRhCf3EGuEUBHo6yR8FhXDTuS7kZ0UmquiCI2/r8j8gbaGBNeP8IRizcAYrPMA5S E8l1uCrw7DHuLscbVni/7UglGaTfFS3BqS5jYbiRt2Qh3p+JPUfm51IG3WCIw/WS 2QHebmHxvMFmAp8AiBWSQJizQBEJ1lIfhhBMN4A7NadMWAe6T2DRclvdrQhJX32k amOiogbW4HJsL6Hphn7Frrw3CENOdWMAvgQBvZ3EjAXgsJuhBA1VIrwofzlDljCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIxcvw9qx4y0qAgg0QrINXpC23BWT2 
Fb9N8YS3Be9eO3fF8KNdM6qGf0kKR16l/PWyP2L+pZxCcCPk83d070qPdnJK9qpJ 6S1hI80Y0oQnY9VBFrdfkc8fGZHXqm5jNS9G32v/AxYpJJC/qrAQnWuOdLtOZaGL 94GEh3XRagvz1wifv8SRI8B1MzxrpCimeMxHkL3zvJFg9FjLGdrak868feqhr6Nb pqH9zL7bMq8YP788qTRELUnL72aDzGAM7HEj7V4yu2uD3i3Ryz3bqWaj9IF38Sa0 6rACBkiNfZBPgExoMUm2GNVyx8hTis2XKRgz4NLh29bBkKrArK9sYDncE9ocwrrX AQ99yn03Xv6TH8bRp0cSj4jzBXc5RFsUQG/LxzJVMjvnkDbwNE41DtFiYz5QVcv1 cMpTH16YfzSL34a479eNq/4+JAs/zcb2wjBskJipMUU4hNx5fhthvfKwDOQbLTqN HcP23iPQIhjdUXf6gpu5RGu4JZ0dAMHMHFKvNL6TNejwx/H6KAPp6rCRsYi6QhAb 42SXdZmhAyQsFpGD9U5ieJApqeCHfj9Xhld61GqLJA9+WLVhDPADjqHoAVvrOkKH OtPegId/lWnCB7p551klAjiEA2/DKxFBIAEhqZpiLl+juZfMXovkdmGxMP4gvNNF gbS2k5A0IJ8q51gZcH1F56smdAmi5kvhPnFdy/9gqeI/F11F1SkbPVLImP0mmrFi zQD5JGfEu1psUYvhpOdaYDkmAK5qU5xHSljqZFz5hXNt4ebvSlurHAhunJb2ln3g AJUHwtZnVBrtYMB0w6fdwYqMxXi4vLeqUiHtIQtbOq32zlSryNPQqG9H0iP9l/G1 t7oUfr9woI/B0kduaY9jd5Qtkqs1DoyfNMSaPNohUK/CWOTD51qOadzSvK0hJ+At 033PFfv9ilaX6GmzHdEVEanrn9a+BoBCnGnuysHk/8gdswj9OzeCemyIFJD7iObN rNex3SCf3ucnAejJOA0awaLx88O1XTteUjcFn26EUji6DRK+8JJiN2lXSyQokNeY ox6Z4hFQDmw/Q0k/iJqe9/Dq4zA0l3Krkpra0DZoWh5kzYUA0g5+Yg6GmRNRa8YG tuuD6qK1SBEzmCYff6ivjgsXV5+vFBSjEpx2dPEaKdYxtHMOjkttuTi1mr+19dVf hSltbzfISbV9HafX76dhwZJ0QwsUx+aOW6OrnK8zoQc5AFOXpe9BrrOuEX01qrM0 KX5tS8Zx5HqDLievjir194oi3r+nAiG14kYlGmOTHshu7keGCgJmzJ0iVG/i+TnV ZSLyd8OqV1F6MET1ijgR3OPL3kt81Zy9lATWk/DgKbGBkkKAnXO2HUw9U34JFyEy vEc81qeHci8sT5QKSFHiP3r8EcK8rT5k9CHpnbFmg7VWSMVD0/wRB/C4BiIw357a xyJ/q1NNvOZVAyYzIzf9TjwREtyeHEo5kS6hyWSn7fbFf3sNGO2I30veWOvE6kFA HMtF3NplOrTYcM7fAK5zJCBK20oU645TxI8GsICMog7IFidFMdRn4MaXpwAjEZO4 44m2M+4XyeRCAZhp1Fu4mDiHGqgd44mKtwvLACVF4ygWZnACDpI17X88wMnwL4uU vgehLZdAE89gvukSCsET1inVBnn/hVenCRbbZ++IGv2XoYvRfeezfOoNUcJXyawQ JFqN0CRB5pliuCesTO2urn4HSwGGoeBd507pGWZmOAjbNjGswlJJXF0NFnNW/zWw UFYy+BI9axuhWTSnCXbNbngdNQKHznKe1Lwit6AI3U9jS33pM3W+pwUAQegVdtpG XT01YgiMCBX+b8B/xcWTww0JbeUwKXudzKsPhQmaA0lubAo04JACMfON8jSZCeRV TyIzgacxGU6YbEKH4PhYTGl9srcWIT9iGSYD53V7Kyvjumd0Y3Qc3JLnuWZT6Oe3 
uJ4xz9jJtoaTDvPJQNK3igscjZnWZSP8XMJo1/f7vbvD57pPt1Hqdirp1EBQNshk iX9CUh4fuGFFeHf6MtGxPofbXmvA2GYcFsOez4/2eOTEmo6H3P4Hrya97XHS0dmD zFSAjzAlacTrn1uuxtxFTikdOwvdmQJJEfyYWCB1lqWOZi97+7nzqyXMLvMgmwug ZF/xHFMhFTR8Wn7puuwf36JpPQiM4oQ/Lp66zkS4UlKrVsmSXIXudLMg8SQ5WqK8 DjevEZwsHHaMtfDsnCAhAdRc2jCpyHKKnmhCDdkcdJJEymWKILUJI5PJ3XtiMHnR Sa35OOICS0lTq4VwhUdkGwGjRoY1GsriPHd6LOt1aom14yJros1h7ta604hSCn4k zj9p7wY9gfgkXWXNfmarrZ9NNwlHxzgSva+jbJcLmE4GMX5OFHHGlRj/9S1xC2Wf MY9orzlooGM74NtmRi4qNkFj3dQCde8XRR4wh2IvPUCsr4j+XaoCoc3R5Rn/yNJK zIkccJ2K14u9X/A0BLXHn5Gnd0tBYcVOqP6dQlW9UWdJC/Xooh7+CVU5cZIxuF/s Vvg+Xwiv3XqekJRu3cMllJDp5rwe5EWZSmnoAiGKjouKAIszlevaRiD/wT6Zra3c Wn/1U/sGop6zRscHR7pgI99NSogzpVGThUs+ez7otDBIdDbLpMjktahgWoi1Vqhc fNZXjA6ob4zTWY/16Ys0YWxHO+MtyWTMP1dnsqePDfYXGUHe8yGxylbcjfrsVYta 4H6eYR86eU3eXB+MpS/iA4jBq4QYWR9QUkd6FDfmRGgWlMXhisPv6Pfnj384NzEV Emeg7tW8wzWR64EON9iGeGYYa2BBl2FVaayMEoUhthhFcDM1r3/Mox5xF0qnlys4 goWkMzqbzA2t97bC0KDGzkcHT4wMeiJBLDZ7S2J2nDAEhcTLY0P2zvOB4879pEWx Bd15AyG1DvNssA5ooaDzKi/Li6NgDuMJ8W7+tmsBwDvwuf2N3koqBeXfKhR4rTqu Wg1k9fX3+8DzDf0EjtDZJdfWZAynONi1PhZGbNbaMKsQ+6TflkCACInRdOADR5GM rL7JtrgF1a9n0HD9vk2WGZqKI71tfS8zODkOZDD8aAusD2DOSmVZl48HX/t4i4Wc 3dgi/gkCMrfK3wOujb8tL4zjnlVkM7kzKk0MgHuA1w81zFjeMFvigHes4IWhQVcz ek3l4bGifI2kzU7bGIi5e/019ppJzGsVcrOE/3z4GS0DJVk6fy7MEMIFx0LhJPlL T+9HMH85sSYb97PTiMWpfBvNw3FSC7QQT9FC3L8d/XtMY3NvZoc7Fz7cSGaj7NXG 1OgVnAzMunPa3QaduoxMF9346s+4a+FrpRxL/3bb4skojjmmLqP4dsbD1uz0fP9y xSifnTnrtjumYWMVi+pEb5kR0sTHl0XS7qKRi3SEfv28uh72KdvcufonIA5rnEb5 +yqAZiqW2OxVsRoVLVODPswP4VIDiun2kCnfkQygPzxlZUeDZur0mmZ3vwC81C1Q dZcjlukZcqUaxybUloUilqfNeby+2Uig0krLh2+AM4EqR63LeZ/tk+zCitHeRBW0 wl3Bd7ShBFg6kN5tCJlHf/G6suIJVr+A9BXfwekO9+//CutKakCwmJTUiNWbQbtN q3aNCnomyD3WjvUbitVO0CWYjZrmMLIsPtzyLQydpT7tjXpHgvwm5GYWdUGnNs4y NbA262sUl7Ku/GDw1CnFYXbxl+qxbucLtCdSIFR2xUq3rEO1MXlD/txdTxn6ANax hi9oBg8tHzuGYJFiCDCvbVVTHgWUSnm/EqfclpJzGmxt8g7vbaohW7NMmMQrLBFP G6qBypgvotx1iJWaHVLNNiXvyqQwTtelNPAUweRoNawBp/5KTwwy/tHeF0gsVQ7y 
mFX4umub9YT34Lpe7qUPKNxXzFcUgAf1SA6vyZ20UI7p42S2OT2PrahJ+uO6LQVD +REhtN0oyS3G6HzAmKkBgw7LcV3XmAr39iSR7mdmoHSJuI9bjveAPhniK+N6uuln xf17Qnw5NWfr9MXcLli7zqwMglU/1bNirkwVqf/ogi/zQ3JYCo6tFGf/rnGQAORJ hvOq2SEYXnizPPIH7VrpE16+jUXwgpiQ8TDyeLPmpZVuhXTXiCaJO5lIwmLQqkmg JqNiT9V44sksNFTGNKgZo5O9rEqfqX4dLjfv6pGJL+MFXD9if4f1JQiXJfhcRcDh Ff9B6HukgbJ1H96eLUUNj8sL1+WPOqawkS4wg7tVaERE8CW7mqk15dCysn9shSut I+7JU7+dZsxpj0ownrxuPAFuT8ZlcBPrFzPUwTlW1G0CbuEco8ijfy5IfbyGCn5s K/0bOfAuNVGoOpLZ1dMki2bGdBwQOQlkLKhAxYcCVQ0/urr1Ab+VXU9kBsIU8ssN GogKngYpuUV0PHmpzmobielOHLjNqA2v9vQSV3Ed48wRy5OCwLX1+vYmYlggMDGt wfl+7QbXYf+k5WnELf3IqYvh8ZWexa0= Azure-WALinuxAgent-a976115/tests/data/wire/invalid_config/000077500000000000000000000000001510742556200233665ustar00rootroot00000000000000ext_conf_multiple_depends_on_for_single_handler.xml000066400000000000000000000065151510742556200356620ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_multiple_runtime_settings_same_plugin.xml000066400000000000000000000037651510742556200356320ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml 
Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_multiple_settings_for_same_handler.xml000066400000000000000000000041031510742556200350370ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_plugin_settings_version_mismatch.xml000066400000000000000000000041211510742556200345640ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_single_and_multi_config_settings_same_plugin.xml000066400000000000000000000040421510742556200370630ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"F6ABAA61098A301EBB8A571C3C7CF77F355F7FA9","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-a976115/tests/data/wire/manifest.xml000066400000000000000000000061451510742556200227510ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.0.0 1.1.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.1.0 1.1.1 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.1.1 1.2.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.2.0 2.0.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.0.0 2.1.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.1.0 True 2.1.1http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.1.1 2.2.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.2.0 
3.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.0 3.1http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.1 4.0.0.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.0 4.0.0.1http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.1 4.1.0.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.1 1.3.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.3.0 2.3.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.3.0 2.4.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.3.0 Azure-WALinuxAgent-a976115/tests/data/wire/manifest_deletion.xml000066400000000000000000000006001510742556200246220ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.0.0 Azure-WALinuxAgent-a976115/tests/data/wire/manifest_vm_access.xml000066400000000000000000000006121510742556200247650ustar00rootroot00000000000000 1.7.0 http://mock-goal-state/foo.bar/zar/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0 Azure-WALinuxAgent-a976115/tests/data/wire/multi-config/000077500000000000000000000000001510742556200230105ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/data/wire/multi-config/ext_conf_mc_disabled_extensions.xml000066400000000000000000000104301510742556200321220ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"},"parameters":[{"name":"extensionName","value":"firstRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { 
"publicSettings": {"message": "Disabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling fourthExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling SingleConfig Extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-a976115/tests/data/wire/multi-config/ext_conf_mc_update_extensions.xml000066400000000000000000000076551510742556200316540ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Disabling firstExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Disabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling SingleConfig extension"} } } ] } 
https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-a976115/tests/data/wire/multi-config/ext_conf_multi_config_no_dependencies.xml000066400000000000000000000076471510742556200333160ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling firstExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling SingleConfig extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r 
https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-a976115/tests/data/wire/multi-config/ext_conf_with_disabled_multi_config.xml000066400000000000000000000205601510742556200327630ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 66!"},"parameters":[{"name":"extensionName","value":"firstRunCommand2"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Second: Hello World 66!"},"parameters":[{"name":"extensionName","value":"secondRunCommand2"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Third: Hello World 3!"},"parameters":[{"name":"extensionName","value":"thirdRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"fileUris":["https://test.blob.core.windows.net/windowsagent/VerifyAgentRunning.ps1"],"commandToExecute":"echo Hello 666"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "EE22B5ECAC6A83B581A7CAC1772BEBD0E016649F", "protectedSettings": 
"MIIB0AYJKoZIhvcNAQcDoIIBwTCCAb0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEC9nmIRFUMGqQOFdBaBjrNswDQYJKoZIhvcNAQEBBQAEggEAUrwxmEAEZuDxhX1S8il8AWeaSFhMDag2DbFEw9v8VCpNM1nhikc21tVaxKfkcugczJ++x5cfM06sMj+8aBisNFUEvAKZzaUvqH91ty7DN8syaZrs0bzx34p8O42bOtGC/UoetqsAlpYrDJr+UYUgm8QFJY8UDCpBHdFZS4bO4FQdAmJ9AVwwTzYT4KI96GNDiUS8skJMycUyFT1IO08n5bSOJqo1b0qUDz28JOQdEAriR+7sCry7okyajhy76FcbD16zhwsZVNzHcQm1IqPU2htlRcWR13RqHz7FUFMFky7t+ZxKB5TAcleF04yo4xkSqLtPDWZuDRu5gEHqom0kQjBLBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECNOB8r5YkBJugCiXeeb4js/ehiWBX7qWgrPlgETtKjUq5N1Z0fxPCjPG7SQb+BqQqiX1", "publicSettings": {"UserName":"test1234"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-a976115/tests/data/wire/multi-config/ext_conf_with_multi_config.xml000066400000000000000000000211521510742556200311320ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"},"parameters":[{"name":"extensionName","value":"firstRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 
1234!"},"parameters":[{"name":"extensionName","value":"secondRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Third: Hello World 3!"},"parameters":[{"name":"extensionName","value":"thirdRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"fileUris":["https://test.blob.core.windows.net/windowsagent/VerifyAgentRunning.ps1"],"commandToExecute":"echo Hello 1234"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "EE22B5ECAC6A83B581A7CAC1772BEBD0E016649F", "protectedSettings": "MIIByAYJKoZIhvcNAQcDoIIBuTCCAbUCAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEC9nmIRFUMGqQOFdBaBjrNswDQYJKoZIhvcNAQEBBQAEggEAylzH/UuK3909SCbqecUyrd+V6EqTJ7Xe7hzMtYtfVTI3TBDnDlFLLzazawgXpsmOV96II9Bk4Kpo7rvwDuZWZulYWuBWw2q8/XPIpZ+hQg2TaV5A2l9N4gBU6JQ/6axHjCsuq7CUOpK9/Yq019I9HP2SqK8Ao4lMEKLR4AGMnoc+x8aKuFvhn+ClYF/75Pz+h1kgVGvj11LMM5u9M87M6Fie6UlbpFjZmBuZaPiyhfAxQxWqsJBsZ8AaUYzy21Wh3YEC54Pqx3n5kOzMH9Q+G5SkgJQ1SBEhf0gLdzeIKX2jEwOAhrL1tVtglUPxHxK5I9NACCvxou7xRtJgvbsYATBDBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECE8EEkK33SSegCCJOYYNsSXz9s87019rTptS0oBvlZgijlB+NQzvNpzAow==", "publicSettings": {"UserName":"test1234"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r 
https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-a976115/tests/data/wire/multi-config/ext_conf_with_multi_config_dependencies.xml000066400000000000000000000127431510742556200336460ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling firstExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling dependent SingleConfig extension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling independent SingleConfig extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-a976115/tests/data/wire/remote_access_10_accounts.xml000066400000000000000000000065631510742556200261620ustar00rootroot00000000000000 1 1 testAccount1 encryptedPasswordString 2019-01-01 
Administrators RemoteDesktopUsers testAccount2 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount3 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount4 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount5 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount6 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount7 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount8 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount9 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount10 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-a976115/tests/data/wire/remote_access_duplicate_accounts.xml000066400000000000000000000014501510742556200277020ustar00rootroot00000000000000 1 1 testAccount encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-a976115/tests/data/wire/remote_access_no_accounts.xml000066400000000000000000000002151510742556200263420ustar00rootroot00000000000000 1 1 Azure-WALinuxAgent-a976115/tests/data/wire/remote_access_single_account.xml000066400000000000000000000007401510742556200270270ustar00rootroot00000000000000 1 1 testAccount encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-a976115/tests/data/wire/remote_access_two_accounts.xml000066400000000000000000000014521510742556200265430ustar00rootroot00000000000000 1 1 testAccount1 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount2 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-a976115/tests/data/wire/rsa-key.pem000066400000000000000000000032501510742556200224710ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- 
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDe7cwx76yO+OjR hWHJrKt0L1ih9F/Bctyq7Ddi/v3CitVBvkQUve4k+xeT538mHyeoOuGI3QFs5mLh i535zbOFaHwfMMQI/CI4ZDtRrQh59XrJSsPytu0fXihsJ81IwNURuNDKwxYR0tKI KUuUN4YxsDSBeqvP5vjSKT05f90gniscuGvPJ6Zgyynmg56KQtSXKaetbyNzPW/4 QFmadyqsgdR7oZHEYj+1Tl6T9/tAPg/dgO55hT7WVdC8JxXeSiaDyRS1NRMFL0bC fcnLNsO4tni2WJsfuju9a4GTrWe3NQ3+vsQV5s59MtuOhoObuYNVcETYiEjBVVsf +shxRxL/AgMBAAECggEAfslt/eSbFoFYIHmkoQe0R5L57LpIj4QdHpTT91igyDkf ipGEtOtEewHXagYaWXsUmehLBwTy35W0HSTDxyQHetNu7GpWw+lqKPpQhmZL0Nkd aUg9Y1hISjPJ96E3bq5FQBwFm5wSfDaUCF68HmLpzm6xngY/mzF4yEYuDPq8r+RV SDhVtrovSImpwLbKmPdn634PqC6bPDgO5htkT/lL/TVkR3Sla3U/YYMu90m7DiAA 46DEblx0yt+zBB+mKR3TU4zIPSFiTWYs/Srsm6nUnNqjf5rvupvXFZt0/eDZat7/ L+/V5HPV0BxGIkCGt0Uv+qZYMGpC3eU+aEbByOr/wQKBgQDy+l4Rvgl0i+XzUPyw N6UrDDpxBVsZ/w48DrBEBMQqTbZxVDK77E2CeMK/JlYMFYFdIT/c9W0U7eWPqe35 kk9jVsPXc3xeoSiZvqK4CZeHEugE9OtJ4jJL1CfDXMcgPM+iSSj/QOJc5v7891QH 3gMOvmVk3Kk/I2MyBAEE6p6WHwKBgQDq4FvO77tsIZRkgmp3gPg4iImcTgwrgDxz aHqlSVc98o4jzWsUShbZTwRgfcZm+kD3eas+gkux8CevYhwjafWiukrnwu3xvUaO AKmgXU7ud/kS9bK/AT6ZpJsfoZzM/CQsConFbz0eXVb/tmipCBpyzi2yskLdk6SP pEZYISknIQKBgHwE9PzjXdoiChYekUu0q1aEoFPN4wkq2W4oJSoisKnTDrtbuaWX 4Jwm3WhJvgPe+i+55+n1T18uakzg9Hm9h03yHHYdGS8H3TxURKPhKXmlWc4l4O7O SNPRjxY1heHbiDOSWh2nVaMLuL0P1NFLLY5Z+lD4HF8AxgHib06+HoILAoGBALvg oa+jNhGlvrSzWYSkJmnaVfEwwS1e03whe9GRG/cSeb6Lx3agWSyUt1ST50tiLOuI aIGE6hW4m5X/7bAqRvFXASnoVDtFgxV91DHR0ZyRXSxcWxHMZg2yjN89gFa77hdI irHibEpIsZm0iH2FXNqusAE79J6XRlAcQKSoSenhAoGARAP9q1WaftXdK4X7L1Ut wnWJSVYMx6AsEo58SsJgNGqpbCl/vZMCwnSo6pdgO4xInu2tld3TKdPWZLoRCGCo PDYVM1GXj5SS8QPmq+h/6fxS65Gl0h0oHUcKXoPD+AxHn2MWWqWzxMdRuthUQATE MT+l5wgZPiEuiceY3Bp1hYk= -----END PRIVATE KEY----- Azure-WALinuxAgent-a976115/tests/data/wire/rsa-key.pub.pem000066400000000000000000000007031510742556200232560ustar00rootroot00000000000000-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3u3MMe+sjvjo0YVhyayr dC9YofRfwXLcquw3Yv79worVQb5EFL3uJPsXk+d/Jh8nqDrhiN0BbOZi4Yud+c2z 
hWh8HzDECPwiOGQ7Ua0IefV6yUrD8rbtH14obCfNSMDVEbjQysMWEdLSiClLlDeG MbA0gXqrz+b40ik9OX/dIJ4rHLhrzyemYMsp5oOeikLUlymnrW8jcz1v+EBZmncq rIHUe6GRxGI/tU5ek/f7QD4P3YDueYU+1lXQvCcV3komg8kUtTUTBS9Gwn3JyzbD uLZ4tlibH7o7vWuBk61ntzUN/r7EFebOfTLbjoaDm7mDVXBE2IhIwVVbH/rIcUcS /wIDAQAB -----END PUBLIC KEY----- Azure-WALinuxAgent-a976115/tests/data/wire/sample.pem000066400000000000000000000032471510742556200224050ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC3zCdThkBDYu83 M7ouc03caqyEwV6lioWbtYdnraoftbuCJrOhy+WipSCVAmhlu/tpaItuzwB9/VTw eSWfB/hB2sabVTKgU8gTQrI6ISy2ocLjqTIZuOETJuGlAIw6OXorhdUr8acZ8ohb ftZIbS9YKxbO7sQi+20sT2ugROJnO7IDGbb2vWhEhp2NAieJ8Nnq0SMv1+cZJZYk 6hiFVSl12g0egVFrRTJBvvTbPS7amLAQkauK/IxG28jZR61pMbHHX+xBg4Iayb2i qp8YnwK3qtf0stc0h9snnLnHSODva1Bo6qVBEcrkuXmtrHL2nUMsV/MgWG3HMgJJ 6Jf/wSFpAgMBAAECggEBALepsS6cvADajzK5ZPXf0NFOY6CxXnPLrWGAj5NCDftr 7bjMFbq7dngFzD46zrnClCOsDZEoF1TO3p8CYF6/Zwvfo5E7HMDrl8XvYwwFdJn3 oTlALMlZXsh1lQv+NSJFp1hwfylPbGzYV/weDeEIAkR3om4cWDCg0GJz5peb3iXK 5fimrZsnInhktloU2Ep20UepR8wbhS5WP7B2s32OULTlWiGdORUVrHJQbTN6O0NZ WzmAcsgfmW1KEBOR9sDFbAdldt8/WcLJVIfWOdFVbCbOaxrnRnZ8j8tsafziVncD QFRpNeyOHZR5S84oAPo2EIVeFCLLeo3Wit/O3IFmhhUCgYEA5jrs0VSowb/xU/Bw wm1cKnSqsub3p3GLPL4TdODYMHH56Wv8APiwcW9O1+oRZoM9M/8KXkDlfFOz10tY bMYvF8MzFKIzzi5TxaWqSWsNeXpoqtFqUed7KRh3ybncIqFAAauTwmAhAlEmGR/e AY7Oy4b2lnRU1ssIOd0VnSnAqTcCgYEAzF6746DhsInlFIQGsUZBOmUtwyu0k1kc gkWhJt5SyQHZtX1SMV2RI6CXFpUZcjv31jM30GmXdvkuj2dIHaDZB5V5BlctPJZq FH0RFxmFHXk+npLJnKKSX1H3/2PxTUsSBcFHEaPCgvIz3720bX7fqRIFtVdrcbQA cB9DARbjWl8CgYBKADyoWCbaB+EA0vLbe505RECtulF176gKgSnt0muKvsfOQFhC 06ya+WUFP4YSRjLA6MQjYYahvKG8nMoyRE1UvPhJNI2kQv3INKSUbqVpG3BTH3am Ftpebi/qliPsuZnCL60RuCZEAWNWhgisxYMwphPSblfqpl3hg290EbyMZwKBgQCs mypHQ166EozW+fcJDFQU9NVkrGoTtMR+Rj6oLEdxG037mb+sj+EAXSaeXQkj0QAt +g4eyL+zLRuk5E8lLu9+F0EjGMfNDyDC8ypW/yfNT9SSa1k6IJhNR1aUbZ2kcU3k bGwQuuWSYOttAbT8cZaHHgCSOyY03xkrmUunBOS6MwKBgBK4D0Uv7ZDf3Y38A07D 
MblDQj3wZeFu6IWi9nVT12U3WuEJqQqqxWnWmETa+TS/7lhd0GjTB+79+qOIhmls XSAmIS/rBUGlk5f9n+vBjQkpbqAvcXV7I/oQASpVga1xB9EuMvXc9y+x/QfmrYVM zqxRWJIMASPLiQr79V0zXGXP -----END PRIVATE KEY-----Azure-WALinuxAgent-a976115/tests/data/wire/shared_config.xml000066400000000000000000000046351510742556200237400ustar00rootroot00000000000000 Azure-WALinuxAgent-a976115/tests/data/wire/sshd_config000066400000000000000000000047771510742556200226430ustar00rootroot00000000000000# Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key HostKey /etc/ssh/ssh_host_ed25519_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 1024 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin without-password StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no 
#KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes Match group root Azure-WALinuxAgent-a976115/tests/data/wire/trans_cert000066400000000000000000000021471510742556200225060ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDEzCCAfugAwIBAgIUToMqRt0z6FfqfiJhS1Hh+u2j3VEwDQYJKoZIhvcNAQEL BQAwGTEXMBUGA1UEAwwOTGludXhUcmFuc3BvcnQwHhcNMjQwODAxMTYwOTU2WhcN MjYwODAxMTYwOTU2WjAZMRcwFQYDVQQDDA5MaW51eFRyYW5zcG9ydDCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAMs8jttzIHATj1BNs3r4cCOAMuVaM1b7 Aw8D7Lz3rTxFieQCh1vLSFl1l9SQmO7rmh0OfEzIKK8jAU4wkLclgospKuYpB9ME 5QnXbLpXWYfW99V4safGvv9lGZztGKMd4ZT2it9QcpKEFFi6W7cjIyiUuyYMB0uI IvA6s6tGs8LgN89Lx7HSTSR86QNPvRtTw0jlrr8nfM7EkaT9Q6xu6GjCp89wCx+h IwcPtstSgfMo5P+3IO30L1wSM+CF1n+nD9M8E4wtcxhoWLuyAPhDsw5f7jKyHmRo Nm9RxToM0ON67SmN2906i0NxzXWtuttww6KE/O6BEZKNlnp9ja3bnM8CAwEAAaNT MFEwHQYDVR0OBBYEFNPDyPggVKjneDW7XuzC8NqgmJ9VMB8GA1UdIwQYMBaAFNPD yPggVKjneDW7XuzC8NqgmJ9VMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL BQADggEBAFuVgcimwPxgpwKNvyUKMY9VFa6UVZs/ky6FEEaxrKVAl2GZF9MoSTO5 
vXMdWYHtSF+RWYxCz5pt7Bv97zuEXvbino/JvsLrE8f265Woe2CdDOPiBCHWBOlH +wM71Hoh0TX7V2TSumona6e0cqUPT7fbNdaNZm8ZHoUscbbPmamERH9Z9zUXWPLk mtjwz17bvRriAMrglA/Dm3xHiEYBJv3+4FnOqPGfg9vZH6xfmrRwrF1Moj5jEZz5 cN2N+vO8HCEqGMBCpSlsWq1c2r3NwLH0J3b6EL7X4jcVvpykKg3WmOZGdataYDk9 0IHy8VyGiX7g3EJOAbbf12FjgLAt4NM= -----END CERTIFICATE----- Azure-WALinuxAgent-a976115/tests/data/wire/trans_prv000066400000000000000000000032501510742556200223540ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDLPI7bcyBwE49Q TbN6+HAjgDLlWjNW+wMPA+y89608RYnkAodby0hZdZfUkJju65odDnxMyCivIwFO MJC3JYKLKSrmKQfTBOUJ12y6V1mH1vfVeLGnxr7/ZRmc7RijHeGU9orfUHKShBRY ulu3IyMolLsmDAdLiCLwOrOrRrPC4DfPS8ex0k0kfOkDT70bU8NI5a6/J3zOxJGk /UOsbuhowqfPcAsfoSMHD7bLUoHzKOT/tyDt9C9cEjPghdZ/pw/TPBOMLXMYaFi7 sgD4Q7MOX+4ysh5kaDZvUcU6DNDjeu0pjdvdOotDcc11rbrbcMOihPzugRGSjZZ6 fY2t25zPAgMBAAECggEAE9CAJxIW4AZwKwagUIVnPXbSv3ynU7weRLj/vD6zg5RO CM5cTw1HLP2jg2RjnKuYt2uBn+TF3qldh7eBbHG6RAIL/iuS6TZpdCeuII7CmlVR jVz6iR594Z2EPUH6bHDN3P2adYI84V8CMtJcfcLtuxehFWkHzwvjSCOY/8JhZUbV ebXXc3zPdSu+WmeManXnzs4VgE6QnSNdyk67fvE1Qxi18s49XXWBPTg01hn+v2yJ QVuv36UP2MgIRZJE/PI9NL6tqqiHmY5sCIJ41hQLRxd/mnRC8hdHrfNNhqHVlC9g JoQQwn/dD12EZwyiQyJyGZOmFDrfv7G3d2QQVJ4OLQKBgQDrxf3nRK28CWaV2evS J4MZjTWmZGiNzMiqEtfTgd0v3+rs73WYaNfQ79Iejj6KJfJq7vtdawqGW1bPNfgF KJCdr3yxjpv5GsHF7fiE8ZWcQ6d6FTWNuayLOEbHnPemYTqg5pd1wsPgIBoE9Zqm zo1iuGxmwHos2yQgif9vEU99wwKBgQDcq/+aDscOO1oimJjAbBl95I8bOtSxR0Ip pv/iaB8+rrS18jiAygXuo34tq+L0HmoniMCuuVg4zhgAxzgnohTlsJpyGnzkdkmo TTan76WkFAedmurzQSu96p5F9HOc0MgluQHtPhO5SsjWhUgXxAU0Zoe+JnTVq0X+ //8z1s64BQKBgEbanl4U7p0WuiSIc+0ZALX6EMhrXlxW0WsC9KdUXJNZmHER2WYv A8R/fca++p5rnvlxzkqZs3UDGAh3cIykTymEJlX5xHfNCbSgulHBhDOMxVTT8N8h kG/aPrMYQfhXOdZG1feGy3ScURVydcJxSl4DjFgouc6nIKlCr2fCbQAfAoGAVpez 3EtSNzZ5HzxMLK3+rtUihufmEI7K2rdqj/iV0i4SQZeELp2YCFXlrJxXmb3ZoBvc qHOYt+m/p4aFdZ/3nU5YvM/CFJCKRN3PxcSXdjRZ7LGe4se/F25an07Wk0GmWI8p v2Ptr3c2Kl/ws0q7VB2rxKUokbP86pygE0KGqdUCgYAf8G1QLDZMq57XsNBpiITY 
xmS/vnmu2jj/DaTAiJ/gPkUaemoJ4xqhuIko7KqaNOBYoOMrOadldygNtrH1c5YE LKdPYQ9/bASF59DnBotKAv79n2svHFHNXkpZA+kIoH7QwhgKpwo3vNwcJcKRIBB9 MjMnBzho1vIbdhoIHJ+Egw== -----END PRIVATE KEY----- Azure-WALinuxAgent-a976115/tests/data/wire/trans_pub000066400000000000000000000007031510742556200223330ustar00rootroot00000000000000-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyzyO23MgcBOPUE2zevhw I4Ay5VozVvsDDwPsvPetPEWJ5AKHW8tIWXWX1JCY7uuaHQ58TMgoryMBTjCQtyWC iykq5ikH0wTlCddsuldZh9b31Xixp8a+/2UZnO0Yox3hlPaK31BykoQUWLpbtyMj KJS7JgwHS4gi8Dqzq0azwuA3z0vHsdJNJHzpA0+9G1PDSOWuvyd8zsSRpP1DrG7o aMKnz3ALH6EjBw+2y1KB8yjk/7cg7fQvXBIz4IXWf6cP0zwTjC1zGGhYu7IA+EOz Dl/uMrIeZGg2b1HFOgzQ43rtKY3b3TqLQ3HNda2623DDooT87oERko2Wen2Nrduc zwIDAQAB -----END PUBLIC KEY----- Azure-WALinuxAgent-a976115/tests/data/wire/version_info.xml000066400000000000000000000003361510742556200236370ustar00rootroot00000000000000 2012-11-30 2010-12-15 2010-28-10 Azure-WALinuxAgent-a976115/tests/ga/000077500000000000000000000000001510742556200171235ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/ga/__init__.py000066400000000000000000000011651510742556200212370ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/ga/test_agent_update_handler.py000066400000000000000000001337571510742556200247110ustar00rootroot00000000000000import contextlib import json import os import random import time from azurelinuxagent.common import conf from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException from azurelinuxagent.common.future import ustr, httpclient from azurelinuxagent.common.protocol.restapi import VMAgentUpdateStatuses from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.version import CURRENT_VERSION, AGENT_NAME from azurelinuxagent.ga.agent_update_handler import get_agent_update_handler from azurelinuxagent.ga.guestagent import GuestAgent, INITIAL_UPDATE_STATE_FILE, RSM_UPDATE_STATE_FILE from tests.ga.test_update import UpdateTestCase from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.wire_protocol_data import DATA_FILE from tests.lib.tools import clear_singleton_instances, load_bin_data, patch class TestAgentUpdate(UpdateTestCase): def setUp(self): UpdateTestCase.setUp(self) # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) @contextlib.contextmanager def _get_agent_update_handler(self, test_data=None, autoupdate_frequency=0.001, autoupdate_enabled=True, initial_update_attempted=True, rsm_update_attempted=False, protocol_get_error=False, mock_get_header=None, mock_put_header=None, mock_random_update_time=True): # Default to DATA_FILE of test_data parameter raises the pylint warning # W0102: Dangerous default value DATA_FILE (builtins.dict) as argument (dangerous-default-value) test_data = DATA_FILE if test_data is None else test_data with mock_wire_protocol(test_data) 
as protocol: def get_handler(url, **kwargs): if HttpRequestPredicates.is_agent_package_request(url): if not protocol_get_error: agent_pkg = load_bin_data(self._get_agent_file_name(), self._agent_zip_dir) return MockHttpResponse(status=httpclient.OK, body=agent_pkg) else: return MockHttpResponse(status=httpclient.SERVICE_UNAVAILABLE) return protocol.mock_wire_data.mock_http_get(url, **kwargs) def put_handler(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded return MockHttpResponse(status=500) protocol.aggregate_status = json.loads(args[0]) return MockHttpResponse(status=201) http_get_handler = mock_get_header if mock_get_header else get_handler http_put_handler = mock_put_header if mock_put_header else put_handler protocol.set_http_handlers(http_get_handler=http_get_handler, http_put_handler=http_put_handler) if initial_update_attempted: open(os.path.join(conf.get_lib_dir(), INITIAL_UPDATE_STATE_FILE), "a").close() if rsm_update_attempted: open(os.path.join(conf.get_lib_dir(), RSM_UPDATE_STATE_FILE), "a").close() original_randint = random.randint def _mock_random_update_time(a, b): if mock_random_update_time: # update should occur immediately return 0 if b == 1: # handle tests where the normal or hotfix frequency is mocked to be very short (e.g., 1 second). Returning a very small delay (0.001 seconds) ensures the logic is tested without introducing significant waiting time return 0.001 return original_randint(a, b) + 10 # If none of the above conditions are met, the function returns additional 10-seconds delay. 
This might represent a normal delay for updates in scenarios where updates are not expected immediately with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=autoupdate_enabled): with patch("azurelinuxagent.common.conf.get_autoupdate_frequency", return_value=autoupdate_frequency): with patch("azurelinuxagent.ga.self_update_version_updater.random.randint", side_effect=_mock_random_update_time): with patch("azurelinuxagent.common.conf.get_autoupdate_gafamily", return_value="Prod"): with patch("azurelinuxagent.common.conf.get_enable_ga_versioning", return_value=True): with patch("azurelinuxagent.common.event.EventLogger.add_event") as mock_telemetry: agent_update_handler = get_agent_update_handler(protocol) agent_update_handler._protocol = protocol yield agent_update_handler, mock_telemetry def _assert_agent_directories_available(self, versions): for version in versions: self.assertTrue(os.path.exists(self.agent_dir(version)), "Agent directory {0} not found".format(version)) def _assert_agent_directories_exist_and_others_dont_exist(self, versions): self._assert_agent_directories_available(versions=versions) other_agents = [agent_dir for agent_dir in self.agent_dirs() if agent_dir not in [self.agent_dir(version) for version in versions]] self.assertFalse(any(other_agents), "All other agents should be purged from agent dir: {0}".format(other_agents)) def _assert_agent_rsm_version_in_goal_state(self, mock_telemetry, inc=1, version="9.9.9.10"): upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if 'New agent version:{0} requested by RSM in Goal state incarnation_{1}'.format(version, inc) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade] self.assertEqual(1, len(upgrade_event_msgs), "Did not find the event indicating that the agent requested version found. 
Got: {0}".format( mock_telemetry.call_args_list)) def _assert_update_discovered_from_agent_manifest(self, mock_telemetry, inc=1, version="9.9.9.10"): upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if 'Self-update is ready to upgrade the new agent: {0} now before processing the goal state: incarnation_{1}'.format(version, inc) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade] self.assertEqual(1, len(upgrade_event_msgs), "Did not find the event indicating that the new version found. Got: {0}".format( mock_telemetry.call_args_list)) def _assert_no_agent_package_telemetry_emitted(self, mock_telemetry, version="9.9.9.10"): upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if 'No matching package found in the agent manifest for version: {0}'.format(version) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade] self.assertEqual(1, len(upgrade_event_msgs), "Did not find the event indicating that the agent package not found. 
Got: {0}".format( mock_telemetry.call_args_list)) def _assert_agent_exit_process_telemetry_emitted(self, message): self.assertIn("Current Agent {0} completed all update checks, exiting current process".format(CURRENT_VERSION), message) def test_it_should_not_update_when_autoupdate_disabled(self): self.prepare_agents(count=1) with self._get_agent_update_handler(autoupdate_enabled=False) as (agent_update_handler, mock_telemetry): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)]) self.assertEqual(0, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "requesting a new agent version" in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "should not check for rsm version") def test_it_should_update_to_largest_version_if_ga_versioning_disabled(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with patch.object(conf, "get_enable_ga_versioning", return_value=False): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_update_to_largest_version_if_manifest_download_time_not_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=10) as (agent_update_handler, _): 
agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") agent_update_handler._protocol.mock_wire_data.set_ga_manifest("wire/ga_manifest.xml") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_not_do_self_update_if_update_time_is_not_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with self._get_agent_update_handler(test_data=data_file, mock_random_update_time=False) as (agent_update_handler, _): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") agent_update_handler._protocol.mock_wire_data.set_ga_manifest("wire/ga_manifest.xml") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_update_to_largest_version_after_time_window_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with patch("azurelinuxagent.common.conf.get_self_update_hotfix_frequency", return_value=1): with patch("azurelinuxagent.common.conf.get_self_update_regular_frequency", return_value=1): with self._get_agent_update_handler(test_data=data_file, mock_random_update_time=False) as (agent_update_handler, mock_telemetry): with 
self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") agent_update_handler._protocol.mock_wire_data.set_ga_manifest("wire/ga_manifest.xml") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() # sleeping for update window to elapse time.sleep(0.1) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, inc=2, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_allow_update_if_largest_version_below_current_version(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_upgrade.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)]) def test_it_should_update_to_largest_version_if_rsm_version_not_available(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) 
self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_download_manifest_again_if_last_attempted_download_time_not_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=10, protocol_get_error=True) as (agent_update_handler, _): # making multiple agent update attempts agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) mock_wire_data = agent_update_handler._protocol.mock_wire_data self.assertEqual(1, mock_wire_data.call_counts['manifest_of_ga.xml'], "Agent manifest should not be downloaded again") def test_it_should_download_manifest_if_last_attempted_download_time_is_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=0.00001, protocol_get_error=True) as (agent_update_handler, _): # making multiple agent update attempts agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) mock_wire_data = agent_update_handler._protocol.mock_wire_data self.assertEqual(3, mock_wire_data.call_counts['manifest_of_ga.xml'], "Agent manifest should be downloaded in all attempts") def test_it_should_not_agent_update_if_rsm_version_is_same_as_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent 
directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family( str(CURRENT_VERSION)) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertEqual(0, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "requesting a new agent version" in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "rsm version should be same as current version") self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_upgrade_agent_if_rsm_version_is_available_greater_than_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, version="9.9.9.10") self._assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10", str(CURRENT_VERSION)]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_downgrade_agent_if_rsm_version_is_available_less_than_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_downgrade_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent 
directories not set properly") downgrade_version = "2.5.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgrade_version) agent_update_handler._protocol.mock_wire_data.set_from_version_in_agent_family(str(CURRENT_VERSION)) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=downgrade_version) self._assert_agent_directories_exist_and_others_dont_exist( versions=[downgrade_version, str(CURRENT_VERSION)]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_allow_rsm_downgrade_if_from_version_different_from_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_downgrade_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") downgrade_version = "2.5.0" from_version = "3.0.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgrade_version) agent_update_handler._protocol.mock_wire_data.set_from_version_in_agent_family(from_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir(downgrade_version)),"New agent directory should not be found") self.assertEqual(1, len([kwarg['message'] for _, kwarg in 
mock_telemetry.call_args_list if "downgrade {0} is not allowed to update from {1}".format(downgrade_version, from_version) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "downgrade should not be allowed") vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(1, vm_agent_update_status.code) self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status) self.assertIn("downgrade {0} is not allowed to update from {1}".format(downgrade_version, from_version), vm_agent_update_status.message) def test_it_should_not_allow_rsm_downgrade_if_from_version_missing(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") downgrade_version = "2.5.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgrade_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir(downgrade_version)), "New agent directory should not be found") def test_it_should_not_do_rsm_update_if_gs_not_updated_in_next_attempt(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" version = "9.9.9.999" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=10) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() 
agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=version) self._assert_no_agent_package_telemetry_emitted(mock_telemetry, version=version) # Now we shouldn't check for download if update not allowed(GS not updated).This run should not add new logs agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), False) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=version) self._assert_no_agent_package_telemetry_emitted(mock_telemetry, version=version) def test_it_should_not_downgrade_below_daemon_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") downgrade_version = "1.2.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgrade_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir(downgrade_version)), "New agent directory should not be found") self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "new version {0} is below than daemon version".format(downgrade_version) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "downgrade should not be allowed below daemon version") vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(1, vm_agent_update_status.code) self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status) self.assertIn("new version {0} is below 
than daemon version".format(downgrade_version), vm_agent_update_status.message) def test_it_should_update_to_largest_version_if_vm_not_enabled_for_rsm_upgrades(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf_vm_not_enabled_for_rsm_upgrades.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_update_to_version_if_version_not_from_rsm(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_version_not_from_rsm.xml" downgrade_version = "2.5.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgrade_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist( versions=[str(CURRENT_VERSION)]) self.assertFalse(os.path.exists(self.agent_dir(downgrade_version)), "New agent directory should not be found") def test_handles_if_rsm_version_not_found_in_pkgs_to_download(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") version = "9.9.9.999" with 
self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=version) self.assertFalse(os.path.exists(self.agent_dir(version)), "New agent directory should not be found") self._assert_no_agent_package_telemetry_emitted(mock_telemetry, version=version) def test_handles_missing_agent_family(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_missing_family.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "No manifest links found for agent family" in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent manifest should not be in GS") # making multiple agent update attempts and assert only one time logged agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), False) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), False) self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "No manifest links found for agent family" in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent manifest error should be logged once if it's same goal 
state") def test_it_should_report_update_status_with_success(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family( str(CURRENT_VERSION)) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(VMAgentUpdateStatuses.Success, vm_agent_update_status.status) self.assertEqual(0, vm_agent_update_status.code) self.assertEqual(str(CURRENT_VERSION), vm_agent_update_status.expected_version) def test_it_should_not_report_update_status_when_self_update_used(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): with self.assertRaises(AgentUpgradeExitException): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertIsNone(vm_agent_update_status, "VM Agent Update Status should not be set when self-update is used") def test_it_should_report_update_with_error_if_auto_update_is_disabled_and_rsm_update_used(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file, rsm_update_attempted=True) as (agent_update_handler, _): with patch("azurelinuxagent.common.conf.get_auto_update_to_latest_version", return_value=False): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(1, vm_agent_update_status.code) 
                self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status)
                self.assertIn("Auto update is disabled, skipping agent update", vm_agent_update_status.message)

    def test_it_should_report_update_status_with_error_on_download_fail(self):
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        with self._get_agent_update_handler(test_data=data_file, protocol_get_error=True) as (agent_update_handler, _):
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            vm_agent_update_status = agent_update_handler.get_vmagent_update_status()
            self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status)
            self.assertEqual(1, vm_agent_update_status.code)
            self.assertEqual(str(CURRENT_VERSION), vm_agent_update_status.expected_version)
            self.assertIn("Failed to download WALinuxAgent-9.9.9.10 from all URIs", vm_agent_update_status.message)

    def test_it_should_not_report_error_status_if_new_rsm_version_is_same_as_current_after_last_update_attempt_failed(self):
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        with self._get_agent_update_handler(test_data=data_file, protocol_get_error=True) as (agent_update_handler, _):
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            vm_agent_update_status = agent_update_handler.get_vmagent_update_status()
            self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status)
            self.assertEqual(1, vm_agent_update_status.code)
            # we report current agent version running
            self.assertEqual(str(CURRENT_VERSION), vm_agent_update_status.expected_version)
            self.assertIn("Failed to download WALinuxAgent-9.9.9.10 from all URIs", vm_agent_update_status.message)

            # Send same version GS after last update attempt failed
            agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
            agent_update_handler._protocol.mock_wire_data.set_incarnation(2)
            agent_update_handler._protocol.client.update_goal_state()
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            vm_agent_update_status = agent_update_handler.get_vmagent_update_status()
            self.assertEqual(VMAgentUpdateStatuses.Success, vm_agent_update_status.status)
            self.assertEqual(0, vm_agent_update_status.code)
            self.assertEqual(str(CURRENT_VERSION), vm_agent_update_status.expected_version)

    def test_it_should_report_update_status_with_missing_rsm_version_error(self):
        data_file = DATA_FILE.copy()
        data_file['ext_conf'] = "wire/ext_conf_version_missing_in_agent_family.xml"

        with self._get_agent_update_handler(test_data=data_file, protocol_get_error=True, rsm_update_attempted=True) as (agent_update_handler, _):
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            vm_agent_update_status = agent_update_handler.get_vmagent_update_status()
            self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status)
            self.assertEqual(1, vm_agent_update_status.code)
            self.assertIn("missing version property. So, skipping agent update", vm_agent_update_status.message)

    def test_it_should_not_log_same_error_next_hours(self):
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_missing_family.xml"
        # Set the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry):
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found")
            self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if
                                     "No manifest links found for agent family" in kwarg['message'] and
                                     kwarg['op'] == WALAEventOperation.AgentUpgrade]),
                             "Agent manifest should not be in GS")

            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if
                                     "No manifest links found for agent family" in kwarg['message'] and
                                     kwarg['op'] == WALAEventOperation.AgentUpgrade]),
                             "Agent manifest should not be in GS")

    def test_it_should_save_rsm_state_of_the_most_recent_goal_state(self):
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _):
            with self.assertRaises(AgentUpgradeExitException):
                agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)

            state_file = os.path.join(conf.get_lib_dir(), RSM_UPDATE_STATE_FILE)
            self.assertTrue(os.path.exists(state_file), "The rsm state file was not saved (can't find {0})".format(state_file))

            # check if state gets updated if most recent goal state has different values
            agent_update_handler._protocol.mock_wire_data.set_extension_config_is_vm_enabled_for_rsm_upgrades("False")
            agent_update_handler._protocol.mock_wire_data.set_incarnation(2)
            agent_update_handler._protocol.client.update_goal_state()
            with self.assertRaises(AgentUpgradeExitException):
                agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            self.assertFalse(os.path.exists(state_file), "The rsm file should be removed (file: {0})".format(state_file))

    def test_it_should_not_update_to_latest_if_flag_is_disabled(self):
        self.prepare_agents(count=1)
        data_file = DATA_FILE.copy()
        data_file['ext_conf'] = "wire/ext_conf.xml"

        with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _):
            with patch("azurelinuxagent.common.conf.get_auto_update_to_latest_version", return_value=False):
                agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
                self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)])

    def test_it_should_continue_with_update_if_number_of_update_attempts_less_than_3(self):
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        latest_version = self.prepare_agents(count=2)
        self.expand_agents()
        latest_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, latest_version))
        agent = GuestAgent.from_installed_agent(latest_path)
        # mark the agent as a bad agent on the first attempt
        agent.mark_failure(is_fatal=True)
        agent.inc_update_attempt_count()
        self.assertTrue(agent.is_blacklisted, "Agent should be blacklisted")
        self.assertEqual(1, agent.get_update_attempt_count(), "Agent update attempts should be 1")

        with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry):
            # For the remaining 2 attempts the handler should continue with the update, even though the
            # agent was marked as a bad agent on the first attempt
            for i in range(2):
                with self.assertRaises(AgentUpgradeExitException):
                    agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(latest_version))
                    agent_update_handler._protocol.mock_wire_data.set_version_in_ga_manifest(str(latest_version))
                    agent_update_handler._protocol.mock_wire_data.set_incarnation(i + 2)
                    agent_update_handler._protocol.client.update_goal_state()
                    agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
                self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), str(latest_version)])
                agent = GuestAgent.from_installed_agent(latest_path)
                self.assertFalse(agent.is_blacklisted, "Agent should not be blacklisted")
                self.assertEqual(i + 2, agent.get_update_attempt_count(), "Agent update attempts should be {0}".format(i + 2))

            # check if next update is not attempted
            agent.mark_failure(is_fatal=True)
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            agent = GuestAgent.from_installed_agent(latest_path)
            self.assertTrue(agent.is_blacklisted, "Agent should be blacklisted")
            self.assertEqual(3, agent.get_update_attempt_count(), "Agent update attempts should be 3")
            self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if
                                     "Attempted enough update retries for version: {0} but still agent not recovered from bad state".format(latest_version) in kwarg['message'] and
                                     kwarg['op'] == WALAEventOperation.AgentUpgrade]),
                             "Update is not allowed after 3 attempts")

    def test_it_should_fail_the_update_if_agent_pkg_is_invalid(self):
        agent_uri = 'https://foo.blob.core.windows.net/bar/OSTCExtensions.WALinuxAgent__9.9.9.10'

        def http_get_handler(uri, *_, **__):
            if uri in (agent_uri, 'http://168.63.129.16:32526/extensionArtifact'):
                response = load_bin_data("ga/WALinuxAgent-9.9.9.10-no_manifest.zip")
                return MockHttpResponse(status=httpclient.OK, body=response)
            return None

        self.prepare_agents(count=1)
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        with self._get_agent_update_handler(test_data=data_file, mock_get_header=http_get_handler) as (agent_update_handler, mock_telemetry):
            agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family("9.9.9.10")
            agent_update_handler._protocol.mock_wire_data.set_incarnation(2)
            agent_update_handler._protocol.client.update_goal_state()
            agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)])
            self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if
                                     "Downloaded agent package: WALinuxAgent-9.9.9.10 is missing agent handler manifest file" in kwarg['message'] and
                                     kwarg['op'] == WALAEventOperation.AgentUpgrade]),
                             "Agent update should fail")

    def test_it_should_use_self_update_for_first_update_always(self):
        self.prepare_agents(count=1)
        # mock the goal state as vm enrolled into RSM
        data_file = DATA_FILE.copy()
        data_file['ext_conf'] = "wire/ext_conf_rsm_version.xml"

        with self._get_agent_update_handler(test_data=data_file, initial_update_attempted=False) as (agent_update_handler, mock_telemetry):
            with self.assertRaises(AgentUpgradeExitException) as context:
                agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            # Verifying agent used self-update for initial update
            self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0")
            self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"])
            self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason))

            state_file = os.path.join(conf.get_lib_dir(), INITIAL_UPDATE_STATE_FILE)
            self.assertTrue(os.path.exists(state_file),
                            "The first update state file was not saved (can't find {0})".format(state_file))

    def test_it_should_honor_any_update_type_after_first_update(self):
        self.prepare_agents(count=1)
        data_file = DATA_FILE.copy()
        data_file['ext_conf'] = "wire/ext_conf_rsm_version.xml"

        # mocking initial update attempt as true
        with self._get_agent_update_handler(test_data=data_file, initial_update_attempted=True) as (agent_update_handler, mock_telemetry):
            with self.assertRaises(AgentUpgradeExitException) as context:
                agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True)
            # Verifying agent honored RSM update
            self._assert_agent_rsm_version_in_goal_state(mock_telemetry, version="9.9.9.10")
            self._assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10", str(CURRENT_VERSION)])
            self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason))

Azure-WALinuxAgent-a976115/tests/ga/test_cgroupapi.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from __future__ import print_function

import os
import subprocess
import tempfile

from azurelinuxagent.common.exception import CGroupsException
from azurelinuxagent.ga.cgroupapi import SystemdCgroupApiv1, SystemdCgroupApiv2, CGroupUtil, create_cgroup_api, \
    InvalidCgroupMountpointException, CgroupV1, CgroupV2
from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry
from azurelinuxagent.common.osutil import systemd
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.ga.cpucontroller import CpuControllerV1, CpuControllerV2
from azurelinuxagent.ga.memorycontroller import MemoryControllerV1, MemoryControllerV2
from tests.lib.mock_cgroup_environment import mock_cgroup_v1_environment, mock_cgroup_v2_environment, \
    mock_cgroup_hybrid_environment
from tests.lib.mock_environment import MockCommand
from tests.lib.tools import AgentTestCase, patch, mock_sleep
from tests.lib.cgroups_tools import CGroupsTools


class _MockedFileSystemTestCase(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

        self.cgroups_file_system_root = os.path.join(self.tmp_dir, "cgroup")
        os.mkdir(self.cgroups_file_system_root)
        os.mkdir(os.path.join(self.cgroups_file_system_root, "cpu"))
        os.mkdir(os.path.join(self.cgroups_file_system_root, "memory"))

        self.mock_cgroups_file_system_root = patch("azurelinuxagent.ga.cgroupapi.CGROUP_FILE_SYSTEM_ROOT", self.cgroups_file_system_root)
        self.mock_cgroups_file_system_root.start()

    def tearDown(self):
        self.mock_cgroups_file_system_root.stop()
        AgentTestCase.tearDown(self)


class CGroupUtilTestCase(AgentTestCase):
    def test_cgroups_should_be_supported_only_on_ubuntu16plus_centos8_redhat8_rhel9_azurelinux3(self):
        test_cases = [
            (['ubuntu', '16.04', 'xenial'], True),
            (['ubuntu', '16.10', 'yakkety'], True),
            (['ubuntu', '18.04', 'bionic'], True),
            (['ubuntu', '18.10', 'cosmic'], True),
            (['ubuntu', '20.04', 'focal'], True),
            (['ubuntu', '20.10', 'groovy'], True),
            (['centos', '7.4', 'Source'], False),
            (['redhat', '7.4', 'Maipo'], False),
            (['centos', '7.5', 'Source'], False),
            (['centos', '7.3', 'Maipo'], False),
            (['redhat', '7.2', 'Maipo'], False),
            (['centos', '7.8', 'Source'], False),
            (['redhat', '7.8', 'Maipo'], False),
            (['redhat', '7.9.1908', 'Core'], False),
            (['centos', '8.1', 'Source'], True),
            (['redhat', '8.2', 'Maipo'], True),
            (['redhat', '8.2.2111', 'Core'], True),
            (['redhat', '9.1', 'Core'], False),
            (['centos', '9.1', 'Source'], False),
            (['bigip', '15.0.1', 'Final'], False),
            (['gaia', '273.562', 'R80.30'], False),
            (['debian', '9.1', ''], False),
            (['rhel', '9.5', "source"], True),
            (['rhel', '9.0', "core"], True),
            (['rhel', '10.9', "core"], False),
            (['mariner', '1.0', ''], False),
            (['mariner', '2.2', ''], False),
            (['azurelinux', '3.0', ''], True),
            (['azurelinux', '3.10', ''], True)
        ]

        for (distro, supported) in test_cases:
            with patch("azurelinuxagent.ga.cgroupapi.get_distro", return_value=distro):
                self.assertEqual(CGroupUtil.distro_supported(), supported, "distro_supported() failed on {0}".format(distro))


class SystemdCgroupsApiTestCase(AgentTestCase):
    def test_create_cgroup_api_raises_exception_when_systemd_mountpoint_does_not_exist(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            # Mock os.path.exists to return False for the os.path.exists(CGROUP_FILE_SYSTEM_ROOT) check
            with patch("os.path.exists", return_value=False):
                with self.assertRaises(InvalidCgroupMountpointException) as context:
                    create_cgroup_api()
                self.assertTrue("Expected cgroup filesystem to be mounted at '/sys/fs/cgroup', but it is not" in str(context.exception))

    def test_create_cgroup_api_is_v2_when_v2_in_use(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            self.assertIsInstance(create_cgroup_api(), SystemdCgroupApiv2)

    def test_create_cgroup_api_raises_exception_when_hybrid_in_use_and_controllers_available_in_unified_hierarchy(self):
        with mock_cgroup_hybrid_environment(self.tmp_dir):
            # Mock /sys/fs/cgroup/unified/cgroup.controllers file to have available controllers
            with patch("os.path.exists", return_value=True):
                with patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="cpu memory"):
                    with self.assertRaises(CGroupsException) as context:
                        create_cgroup_api()
                    self.assertTrue("Detected hybrid cgroup mode, but there are controllers available to be enabled in unified hierarchy: cpu memory" in str(context.exception))

    def test_create_cgroup_api_raises_exception_when_v1_in_use_and_controllers_have_non_systemd_mountpoints(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            # Mock the mountpoint check to report that at least one controller has a non-systemd mountpoint
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1.are_mountpoints_systemd_created', return_value=False):
                with self.assertRaises(InvalidCgroupMountpointException) as context:
                    create_cgroup_api()
                self.assertTrue("Expected cgroup controllers to be mounted at '/sys/fs/cgroup', but at least one is not." in str(context.exception))

    def test_create_cgroup_api_is_v1_when_v1_in_use(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            self.assertIsInstance(create_cgroup_api(), SystemdCgroupApiv1)

    def test_create_cgroup_api_is_v1_when_hybrid_in_use(self):
        with mock_cgroup_hybrid_environment(self.tmp_dir):
            # Mock os.path.exists to return True for the os.path.exists('/sys/fs/cgroup/cgroup.controllers') check
            with patch("os.path.exists", return_value=True):
                self.assertIsInstance(create_cgroup_api(), SystemdCgroupApiv1)

    def test_create_cgroup_api_raises_exception_when_cgroup_mode_cannot_be_determined(self):
        unknown_cgroup_type = "unknown_cgroup_type"
        with patch('azurelinuxagent.common.utils.shellutil.run_command', return_value=unknown_cgroup_type):
            with self.assertRaises(CGroupsException) as context:
                create_cgroup_api()
            self.assertTrue("/sys/fs/cgroup has an unexpected file type: {0}".format(unknown_cgroup_type) in str(context.exception))

    def test_get_unit_property_should_return_the_value_of_the_given_property(self):
        # We expect same behavior for v1 and v2
        mock_envs = [mock_cgroup_v1_environment(self.tmp_dir), mock_cgroup_v2_environment(self.tmp_dir)]
        for env in mock_envs:
            with env:
                cpu_accounting = systemd.get_unit_property("walinuxagent.service", "CPUAccounting")
                self.assertEqual(cpu_accounting, "no", "Property {0} of {1} is incorrect".format("CPUAccounting", "walinuxagent.service"))


class SystemdCgroupsApiv1TestCase(AgentTestCase):
    def test_get_controller_mountpoints_should_return_only_supported_controllers(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            cgroup_api = create_cgroup_api()
            # Expected value comes from findmnt output in the mocked environment
            self.assertEqual(cgroup_api._get_controller_mountpoints(), {
                'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct',
                'memory': '/sys/fs/cgroup/memory'
            }, "The controller mountpoints are not correct")

    def test_are_mountpoints_systemd_created_should_return_False_if_mountpoints_are_not_systemd(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/custom/mountpoint/path', 'memory': '/custom/mountpoint/path'}):
                self.assertFalse(SystemdCgroupApiv1().are_mountpoints_systemd_created())

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/custom/mountpoint/path'}):
                self.assertFalse(SystemdCgroupApiv1().are_mountpoints_systemd_created())

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'memory': '/custom/mountpoint/path'}):
                self.assertFalse(SystemdCgroupApiv1().are_mountpoints_systemd_created())

    def test_are_mountpoints_systemd_created_should_return_True_if_mountpoints_are_systemd(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct', 'memory': '/sys/fs/cgroup/memory'}):
                self.assertTrue(SystemdCgroupApiv1().are_mountpoints_systemd_created())

            # are_mountpoints_systemd_created should only check controllers which are mounted
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'}):
                self.assertTrue(SystemdCgroupApiv1().are_mountpoints_systemd_created())

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'memory': '/sys/fs/cgroup/memory'}):
                self.assertTrue(SystemdCgroupApiv1().are_mountpoints_systemd_created())

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={}):
                self.assertTrue(SystemdCgroupApiv1().are_mountpoints_systemd_created())

    def test_get_relative_paths_for_process_should_return_the_cgroup_v1_relative_paths(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            relative_paths = create_cgroup_api()._get_process_relative_controller_paths('self')
            self.assertEqual(len(relative_paths), 2)
            self.assertEqual(relative_paths.get('cpu,cpuacct'), "system.slice/walinuxagent.service", "The relative path for the CPU cgroup is incorrect")
            self.assertEqual(relative_paths.get('memory'), "system.slice/walinuxagent.service", "The relative path for the memory cgroup is incorrect")

    def test_get_unit_cgroup_should_return_correct_paths_for_cgroup_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension")
            self.assertIsInstance(cgroup, CgroupV1)
            self.assertEqual(cgroup._cgroup_name, "extension")
            self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct', 'memory': '/sys/fs/cgroup/memory'})
            self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service', 'memory': '/sys/fs/cgroup/memory/system.slice/extension.service'})

    def test_get_unit_cgroup_should_return_only_mounted_controllers_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'}):
                cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "extension")
                self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'})
                self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'})

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={}):
                cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "extension")
                self.assertEqual(cgroup._controller_mountpoints, {})
                self.assertEqual(cgroup._controller_paths, {})

    def test_get_cgroup_from_relative_path_should_return_the_correct_paths_for_cgroup_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            cgroup = create_cgroup_api().get_cgroup_from_relative_path(relative_path="some/relative/path", cgroup_name="test_cgroup")
            self.assertIsInstance(cgroup, CgroupV1)
            self.assertEqual(cgroup._cgroup_name, "test_cgroup")
            self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct', 'memory': '/sys/fs/cgroup/memory'})
            self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/some/relative/path', 'memory': '/sys/fs/cgroup/memory/some/relative/path'})

    def test_get_cgroup_from_relative_path_should_return_only_mounted_controllers_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'}):
                cgroup = create_cgroup_api().get_cgroup_from_relative_path(relative_path="some/relative/path", cgroup_name="test_cgroup")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "test_cgroup")
                self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'})
                self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/some/relative/path'})

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={}):
                cgroup = create_cgroup_api().get_cgroup_from_relative_path(relative_path="some/relative/path", cgroup_name="test_cgroup")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "test_cgroup")
                self.assertEqual(cgroup._controller_mountpoints, {})
                self.assertEqual(cgroup._controller_paths, {})

    def test_get_process_cgroup_should_return_the_correct_paths_for_cgroup_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
            self.assertIsInstance(cgroup, CgroupV1)
            self.assertEqual(cgroup._cgroup_name, "walinuxagent")
            self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct', 'memory': '/sys/fs/cgroup/memory'})
            self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service', 'memory': '/sys/fs/cgroup/memory/system.slice/walinuxagent.service'})

    def test_get_process_cgroup_should_return_only_mounted_controllers_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'}):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "walinuxagent")
                self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'})
                self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service'})

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={}):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "walinuxagent")
                self.assertEqual(cgroup._controller_mountpoints, {})
                self.assertEqual(cgroup._controller_paths, {})

    def test_get_process_cgroup_should_return_only_mounted_process_controllers_v1(self):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={'cpu,cpuacct': 'relative/path'}):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "walinuxagent")
                self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct', 'memory': '/sys/fs/cgroup/memory'})
                self.assertEqual(cgroup._controller_paths, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct/relative/path'})

            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={}):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                self.assertIsInstance(cgroup, CgroupV1)
                self.assertEqual(cgroup._cgroup_name, "walinuxagent")
                self.assertEqual(cgroup._controller_mountpoints, {'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct', 'memory': '/sys/fs/cgroup/memory'})
                self.assertEqual(cgroup._controller_paths, {})

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_cgroups_v1_command_should_return_the_command_output(self, _):
        with mock_cgroup_v1_environment(self.tmp_dir):
            original_popen = subprocess.Popen

            def mock_popen(command, *args, **kwargs):
                if isinstance(command, str) and command.startswith('systemd-run --property'):
                    command = "echo TEST_OUTPUT"
                return original_popen(command, *args, **kwargs)

            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as output_file:
                with patch("subprocess.Popen", side_effect=mock_popen) as popen_patch:  # pylint: disable=unused-variable
                    command_output = create_cgroup_api().start_extension_command(
                        extension_name="Microsoft.Compute.TestExtension-1.2.3",
                        command="A_TEST_COMMAND",
                        cmd_name="test",
                        shell=True,
                        timeout=300,
                        cwd=self.tmp_dir,
                        env=dict(os.environ),
                        stdout=output_file,
                        stderr=output_file)

                    self.assertIn("[stdout]\nTEST_OUTPUT\n", command_output, "The test output was not captured")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_cgroups_v1_command_should_execute_the_command_in_a_cgroup(self, _):
        with mock_cgroup_v1_environment(self.tmp_dir):
            create_cgroup_api().start_extension_command(
                extension_name="Microsoft.Compute.TestExtension-1.2.3",
                command="test command",
                cmd_name="test",
                shell=False,
                timeout=300,
                cwd=self.tmp_dir,
                env=dict(os.environ),
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE)

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'cpu' in cg.path),
                "The extension's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'memory' in cg.path),
                "The extension's Memory is not being tracked")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_cgroups_v1_command_should_use_systemd_to_execute_the_command(self, _):
        with mock_cgroup_v1_environment(self.tmp_dir):
            with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                create_cgroup_api().start_extension_command(
                    extension_name="Microsoft.Compute.TestExtension-1.2.3",
                    command="the-test-extension-command",
                    cmd_name="test",
                    timeout=300,
                    shell=True,
                    cwd=self.tmp_dir,
                    env=dict(os.environ),
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE)

                extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if "the-test-extension-command" in args[0]]

                self.assertEqual(1, len(extension_calls), "The extension should have been invoked exactly once")
                self.assertIn("systemd-run", extension_calls[0], "The extension should have been invoked using systemd")


class SystemdCgroupsApiv2TestCase(AgentTestCase):
    def test_get_root_cgroup_path_should_return_v2_cgroup_root(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            cgroup_api = create_cgroup_api()
            self.assertEqual(cgroup_api._get_root_cgroup_path(), '/sys/fs/cgroup')

    def test_get_root_cgroup_path_should_only_match_systemd_mountpoint(self):
        with mock_cgroup_v2_environment(self.tmp_dir) as env:
            # Mock an environment which has multiple v2 mountpoints
            env.add_command(MockCommand(r"^findmnt -t cgroup2 --noheadings$",
'''/custom/mountpoint/path1 cgroup2 cgroup2 rw,relatime
/sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime
/custom/mountpoint/path2 none cgroup2 rw,relatime
'''))

            cgroup_api = create_cgroup_api()
            self.assertEqual(cgroup_api._get_root_cgroup_path(), '/sys/fs/cgroup')

    def test_get_controllers_enabled_at_root_should_return_list_of_agent_supported_and_enabled_controllers(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            cgroup_api = create_cgroup_api()
            enabled_controllers = cgroup_api._get_controllers_enabled_at_root('/sys/fs/cgroup')
            self.assertEqual(len(enabled_controllers), 2)
            self.assertIn('cpu', enabled_controllers)
            self.assertIn('memory', enabled_controllers)

    def test_get_controllers_enabled_at_root_should_return_empty_list_if_root_cgroup_path_is_empty(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_root_cgroup_path', return_value=""):
                cgroup_api = create_cgroup_api()
                self.assertEqual(cgroup_api._controllers_enabled_at_root, [])

    def test_get_process_relative_cgroup_path_should_return_relative_path(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            cgroup_api = create_cgroup_api()
            self.assertEqual(cgroup_api._get_process_relative_cgroup_path(process_id="self"), "system.slice/walinuxagent.service")

    def test_get_unit_cgroup_should_return_correct_paths_for_cgroup_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension")
            self.assertIsInstance(cgroup, CgroupV2)
            self.assertEqual(cgroup._cgroup_name, "extension")
            self.assertEqual(cgroup._root_cgroup_path, "/sys/fs/cgroup")
            self.assertEqual(cgroup._cgroup_path, "/sys/fs/cgroup/system.slice/extension.service")
            self.assertEqual(len(cgroup._enabled_controllers), 2)
            self.assertIn('cpu', cgroup._enabled_controllers)
            self.assertIn('memory', cgroup._enabled_controllers)

    def test_get_unit_cgroup_should_return_empty_paths_if_root_path_empty_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_root_cgroup_path', return_value=""):
                cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension")
                self.assertIsInstance(cgroup, CgroupV2)
                self.assertEqual(cgroup._cgroup_name, "extension")
                self.assertEqual(cgroup._root_cgroup_path, "")
                self.assertEqual(cgroup._cgroup_path, "")
                self.assertEqual(len(cgroup._enabled_controllers), 0)

    def test_get_unit_cgroup_should_return_only_enabled_controllers_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=['cpu']):
                cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension")
                self.assertIsInstance(cgroup, CgroupV2)
                self.assertEqual(cgroup._cgroup_name, "extension")
                self.assertEqual(cgroup._root_cgroup_path,
"/sys/fs/cgroup") self.assertEqual(cgroup._cgroup_path, "/sys/fs/cgroup/system.slice/extension.service") self.assertEqual(len(cgroup._enabled_controllers), 1) self.assertIn('cpu', cgroup._enabled_controllers) with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=[]): cgroup = create_cgroup_api().get_unit_cgroup(unit_name="extension.service", cgroup_name="extension") self.assertIsInstance(cgroup, CgroupV2) self.assertEqual(cgroup._cgroup_name, "extension") self.assertEqual(cgroup._root_cgroup_path, "/sys/fs/cgroup") self.assertEqual(cgroup._cgroup_path, "/sys/fs/cgroup/system.slice/extension.service") self.assertEqual(len(cgroup._enabled_controllers), 0) def test_get_cgroup_from_relative_path_should_return_the_correct_paths_for_cgroup_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): cgroup = create_cgroup_api().get_cgroup_from_relative_path(relative_path="some/relative/path", cgroup_name="test_cgroup") self.assertIsInstance(cgroup, CgroupV2) self.assertEqual(cgroup._cgroup_name, "test_cgroup") self.assertEqual(cgroup._root_cgroup_path, "/sys/fs/cgroup") self.assertEqual(cgroup._cgroup_path, "/sys/fs/cgroup/some/relative/path") self.assertEqual(len(cgroup._enabled_controllers), 2) self.assertIn('cpu', cgroup._enabled_controllers) self.assertIn('memory', cgroup._enabled_controllers) def test_get_cgroup_from_relative_path_should_return_empty_paths_if_root_path_empty_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_root_cgroup_path', return_value=""): cgroup = create_cgroup_api().get_cgroup_from_relative_path(relative_path="some/relative/path", cgroup_name="test_cgroup") self.assertIsInstance(cgroup, CgroupV2) self.assertEqual(cgroup._cgroup_name, "test_cgroup") self.assertEqual(cgroup._root_cgroup_path, "") self.assertEqual(cgroup._cgroup_path, "") self.assertEqual(len(cgroup._enabled_controllers), 0) def 
test_get_process_cgroup_should_return_the_correct_paths_for_cgroup_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertIsInstance(cgroup, CgroupV2) self.assertEqual(cgroup._cgroup_name, "walinuxagent") self.assertEqual(cgroup._root_cgroup_path, "/sys/fs/cgroup") self.assertEqual(cgroup._cgroup_path, "/sys/fs/cgroup/system.slice/walinuxagent.service") self.assertEqual(len(cgroup._enabled_controllers), 2) self.assertIn('cpu', cgroup._enabled_controllers) self.assertIn('memory', cgroup._enabled_controllers) def test_get_process_cgroup_should_return_empty_paths_if_root_path_empty_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_root_cgroup_path', return_value=""): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertIsInstance(cgroup, CgroupV2) self.assertEqual(cgroup._cgroup_name, "walinuxagent") self.assertEqual(cgroup._root_cgroup_path, "") self.assertEqual(cgroup._cgroup_path, "") self.assertEqual(len(cgroup._enabled_controllers), 0) class SystemdCgroupsApiMockedFileSystemTestCase(_MockedFileSystemTestCase): def test_cleanup_legacy_cgroups_should_remove_legacy_cgroups(self): # Set up a mock /var/run/waagent.pid file daemon_pid_file = os.path.join(self.tmp_dir, "waagent.pid") fileutil.write_file(daemon_pid_file, "42\n") # Set up old controller cgroups, but do not add the daemon's PID to them legacy_cpu_cgroup = CGroupsTools.create_legacy_agent_cgroup(self.cgroups_file_system_root, "cpu", '') legacy_memory_cgroup = CGroupsTools.create_legacy_agent_cgroup(self.cgroups_file_system_root, "memory", '') with patch("azurelinuxagent.ga.cgroupapi.get_agent_pid_file_path", return_value=daemon_pid_file): legacy_cgroups = CGroupUtil.cleanup_legacy_cgroups() self.assertEqual(legacy_cgroups, 2, "cleanup_legacy_cgroups() did not find all the 
expected cgroups") self.assertFalse(os.path.exists(legacy_cpu_cgroup), "cleanup_legacy_cgroups() did not remove the CPU legacy cgroup") self.assertFalse(os.path.exists(legacy_memory_cgroup), "cleanup_legacy_cgroups() did not remove the memory legacy cgroup") class CgroupsApiv1TestCase(AgentTestCase): def test_get_supported_controllers_returns_v1_controllers(self): with mock_cgroup_v1_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_supported_controller_names() self.assertEqual(len(controllers), 2) self.assertIn('cpu,cpuacct', controllers) self.assertIn('memory', controllers) def test_check_in_expected_slice_returns_True_if_all_paths_in_expected_slice(self): with mock_cgroup_v1_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertTrue(cgroup.check_in_expected_slice(expected_slice='system.slice')) with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={'cpu,cpuacct': 'system.slice/walinuxagent.service'}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertTrue(cgroup.check_in_expected_slice(expected_slice='system.slice')) with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertTrue(cgroup.check_in_expected_slice(expected_slice='system.slice')) def test_check_in_expected_slice_returns_False_if_any_paths_not_in_expected_slice(self): with mock_cgroup_v1_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertFalse(cgroup.check_in_expected_slice(expected_slice='user.slice')) with 
patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={'cpu,cpuacct': 'system.slice/walinuxagent.service', 'memory': 'user.slice/walinuxagent.service'}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertFalse(cgroup.check_in_expected_slice(expected_slice='user.slice')) with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={'cpu,cpuacct': '', 'memory': ''}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertFalse(cgroup.check_in_expected_slice(expected_slice='system.slice')) def test_get_controllers_returns_all_supported_controllers_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 2) self.assertIsInstance(controllers[0], CpuControllerV1) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, "/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service") self.assertIsInstance(controllers[1], MemoryControllerV1) self.assertEqual(controllers[1].name, "walinuxagent") self.assertEqual(controllers[1].path, "/sys/fs/cgroup/memory/system.slice/walinuxagent.service") def test_get_controllers_returns_only_mounted_controllers_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'cpu,cpuacct': '/sys/fs/cgroup/cpu,cpuacct'}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 1) self.assertIsInstance(controllers[0], CpuControllerV1) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, 
"/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service") with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={'memory': '/sys/fs/cgroup/memory'}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 1) self.assertIsInstance(controllers[0], MemoryControllerV1) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, "/sys/fs/cgroup/memory/system.slice/walinuxagent.service") with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 0) def test_get_controllers_returns_only_controllers_at_expected_path_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={'cpu,cpuacct': 'system.slice/walinuxagent.service', 'memory': 'unexpected/path'}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers(expected_relative_path="system.slice/walinuxagent.service") self.assertEqual(len(controllers), 1) self.assertIsInstance(controllers[0], CpuControllerV1) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, "/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service") with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_process_relative_controller_paths', return_value={'cpu,cpuacct': 'unexpected/path', 'memory': 'unexpected/path'}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = 
cgroup.get_controllers(expected_relative_path="system.slice/walinuxagent.service") self.assertEqual(len(controllers), 0) def test_get_procs_path_returns_correct_path_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") procs_path = cgroup.get_controller_procs_path(controller='cpu,cpuacct') self.assertEqual(procs_path, "/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service/cgroup.procs") procs_path = cgroup.get_controller_procs_path(controller='memory') self.assertEqual(procs_path, "/sys/fs/cgroup/memory/system.slice/walinuxagent.service/cgroup.procs") def test_get_processes_returns_processes_at_all_controller_paths_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") procs = cgroup.get_processes() self.assertEqual(len(procs), 3) self.assertIn(int(123), procs) self.assertIn(int(234), procs) self.assertIn(int(345), procs) def test_get_processes_returns_empty_list_if_no_controllers_mounted_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv1._get_controller_mountpoints', return_value={}): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") procs = cgroup.get_processes() self.assertIsInstance(procs, list) self.assertEqual(len(procs), 0) def test_get_processes_returns_empty_list_if_procs_path_empty_v1(self): with mock_cgroup_v1_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.CgroupV1.get_controller_procs_path', return_value=""): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") procs = cgroup.get_processes() self.assertIsInstance(procs, list) self.assertEqual(len(procs), 0) class CgroupsApiv2TestCase(AgentTestCase): def test_get_supported_controllers_returns_v2_controllers(self): with 
mock_cgroup_v2_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_supported_controller_names() self.assertEqual(len(controllers), 2) self.assertIn('cpu', controllers) self.assertIn('memory', controllers) def test_check_in_expected_slice_returns_True_if_cgroup_path_in_expected_slice(self): with mock_cgroup_v2_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertTrue(cgroup.check_in_expected_slice(expected_slice='system.slice')) def test_check_in_expected_slice_returns_False_if_cgroup_path_not_in_expected_slice(self): with mock_cgroup_v2_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertFalse(cgroup.check_in_expected_slice(expected_slice='user.slice')) with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_process_relative_cgroup_path', return_value=""): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") self.assertFalse(cgroup.check_in_expected_slice(expected_slice='system.slice')) def test_get_controllers_returns_all_supported_controllers_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 2) self.assertIsInstance(controllers[0], CpuControllerV2) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, "/sys/fs/cgroup/system.slice/walinuxagent.service") self.assertIsInstance(controllers[1], MemoryControllerV2) self.assertEqual(controllers[1].name, "walinuxagent") self.assertEqual(controllers[1].path, "/sys/fs/cgroup/system.slice/walinuxagent.service") def test_get_controllers_returns_only_enabled_controllers_v2(self): with 
mock_cgroup_v2_environment(self.tmp_dir): with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=["cpu"]): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 1) self.assertIsInstance(controllers[0], CpuControllerV2) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, "/sys/fs/cgroup/system.slice/walinuxagent.service") with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=["memory"]): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 1) self.assertIsInstance(controllers[0], MemoryControllerV2) self.assertEqual(controllers[0].name, "walinuxagent") self.assertEqual(controllers[0].path, "/sys/fs/cgroup/system.slice/walinuxagent.service") with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=[]): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 0) def test_get_controllers_returns_empty_if_cgroup_path_is_empty_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): mock_cgroup_empty_path = CgroupV2(cgroup_name="test", root_cgroup_path="/sys/fs/cgroup", cgroup_path="", enabled_controllers=["cpu", "memory"]) with patch("azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2.get_process_cgroup", return_value=mock_cgroup_empty_path): cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent") controllers = cgroup.get_controllers() self.assertEqual(len(controllers), 0) def test_get_controllers_returns_only_controllers_at_expected_path_v2(self): with mock_cgroup_v2_environment(self.tmp_dir): 
            mock_cgroup_unexpected_path = CgroupV2(cgroup_name="test", root_cgroup_path="/sys/fs/cgroup", cgroup_path="/sys/fs/cgroup/unexpected/path", enabled_controllers=["cpu", "memory"])
            with patch("azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2.get_process_cgroup", return_value=mock_cgroup_unexpected_path):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                controllers = cgroup.get_controllers(expected_relative_path="system.slice/walinuxagent.service")
                self.assertEqual(len(controllers), 0)

    def test_get_procs_path_returns_empty_if_root_cgroup_empty_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_root_cgroup_path', return_value=""):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                procs_path = cgroup.get_procs_path()
                self.assertEqual(procs_path, "")

    def test_get_procs_path_returns_correct_path_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
            procs_path = cgroup.get_procs_path()
            self.assertEqual(procs_path, "/sys/fs/cgroup/system.slice/walinuxagent.service/cgroup.procs")

    def test_get_processes_returns_processes_at_all_controller_paths_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
            procs = cgroup.get_processes()
            self.assertEqual(len(procs), 3)
            self.assertIn(int(123), procs)
            self.assertIn(int(234), procs)
            self.assertIn(int(345), procs)

    def test_get_processes_returns_empty_list_if_root_cgroup_empty_v2(self):
        with mock_cgroup_v2_environment(self.tmp_dir):
            with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_root_cgroup_path', return_value=""):
                cgroup = create_cgroup_api().get_process_cgroup(process_id="self", cgroup_name="walinuxagent")
                procs = cgroup.get_processes()
                self.assertEqual(len(procs), 0)
Azure-WALinuxAgent-a976115/tests/ga/test_cgroupconfigurator.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from __future__ import print_function

import contextlib
import os
import random
import re
import subprocess
import tempfile
import time
import threading

from azurelinuxagent.common import conf
from azurelinuxagent.ga.cgroupcontroller import AGENT_NAME_TELEMETRY, MetricsCounter, MetricValue, MetricsCategory
from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator, DisableCgroups
from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry
from azurelinuxagent.common.event import WALAEventOperation
from azurelinuxagent.common.exception import CGroupsException, AgentMemoryExceededException
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils import shellutil, fileutil
from azurelinuxagent.ga.cpucontroller import CpuControllerV1
from tests.lib.mock_environment import MockCommand
from tests.lib.mock_cgroup_environment import mock_cgroup_v1_environment, UnitFilePaths, mock_cgroup_v2_environment
from tests.lib.tools import AgentTestCase, patch, mock_sleep, data_dir
from tests.lib.miscellaneous_tools import format_processes, wait_for


class CGroupConfiguratorSystemdTestCase(AgentTestCase):
    @classmethod
    def tearDownClass(cls):
        CGroupConfigurator._instance = None
        AgentTestCase.tearDownClass()

    def tearDown(self):
        CGroupConfigurator._instance = None
        AgentTestCase.tearDown(self)

    @contextlib.contextmanager
    def _get_cgroup_configurator(self, initialize=True, enable=True, mock_commands=None):
        CGroupConfigurator._instance = None
        configurator = CGroupConfigurator.get_instance()
        CGroupsTelemetry.reset()
        with mock_cgroup_v1_environment(self.tmp_dir) as mock_environment:
            if mock_commands is not None:
                for command in mock_commands:
                    mock_environment.add_command(command)
            configurator.mocks = mock_environment
            if initialize:
                if not enable:
                    with patch.object(configurator, "enable"):
                        configurator.initialize()
                else:
                    configurator.initialize()
            yield configurator

    @contextlib.contextmanager
    def _get_cgroup_configurator_v2(self, initialize=True, enable=True, mock_commands=None):
        CGroupConfigurator._instance = None
        configurator = CGroupConfigurator.get_instance()
        CGroupsTelemetry.reset()
        with mock_cgroup_v2_environment(self.tmp_dir) as mock_environment:
            if mock_commands is not None:
                for command in mock_commands:
                    mock_environment.add_command(command)
            configurator.mocks = mock_environment
            if initialize:
                if not enable:
                    with patch.object(configurator, "enable"):
                        configurator.initialize()
                else:
                    configurator.initialize()
            yield configurator

    def test_initialize_should_enable_cgroups_v1(self):
        with self._get_cgroup_configurator() as configurator:
            self.assertTrue(configurator.enabled(), "cgroups were not enabled")

    def test_initialize_should_not_enable_when_cgroup_api_cannot_be_determined(self):
        # Mock cgroup api to raise CGroupsException
        def mock_create_cgroup_api():
            raise CGroupsException("")

        with patch('azurelinuxagent.ga.cgroupconfigurator.create_cgroup_api', side_effect=mock_create_cgroup_api):
            with self._get_cgroup_configurator() as configurator:
                self.assertFalse(configurator.enabled(), "cgroups were enabled")

    def test_should_cleanup_and_reset_cpu_quota_if_agent_cgroups_not_enabled_for_enforcement(self):
        command_mocks = [MockCommand(r"^systemctl show walinuxagent.service --property CPUQuotaPerSecUSec",
'''CPUQuotaPerSecUSec=5ms
''')]
        with self._get_cgroup_configurator_v2(initialize=False, mock_commands=command_mocks) as configurator:
            with patch("azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2.can_enforce_cpu", return_value=False):
                agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota)

                # The mock creates the drop-in file
                configurator.mocks.add_data_file(os.path.join(data_dir, 'init', "12-CPUQuota.conf"), UnitFilePaths.cpu_quota)
                self.assertTrue(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was not created".format(agent_drop_in_file_cpu_quota))

                configurator.initialize()

                self.assertFalse(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was not cleaned up".format(agent_drop_in_file_cpu_quota))
                cmd = 'systemctl set-property walinuxagent.service CPUQuota= --runtime'
                self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to reset the CPU quota was not called")

    def test_initialize_should_start_tracking_the_agent_cgroups(self):
        with self._get_cgroup_configurator() as configurator:
            tracked = CGroupsTelemetry._tracked

            self.assertTrue(configurator.enabled(), "Cgroups should be enabled")
            self.assertTrue(any(cg for cg in tracked.values() if cg.name == AGENT_NAME_TELEMETRY and 'cpu' in cg.path),
                            "The Agent's CPU is not being tracked. Tracked: {0}".format(tracked))
            self.assertTrue(any(cg for cg in tracked.values() if cg.name == AGENT_NAME_TELEMETRY and 'memory' in cg.path),
                            "The Agent's Memory is not being tracked. Tracked: {0}".format(tracked))

    def test_initialize_should_start_tracking_other_controllers_when_one_is_not_present(self):
        command_mocks = [MockCommand(r"^findmnt -t cgroup --noheadings$",
'''/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices
/sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma
/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event
/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio
/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset
/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer
/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids
''')]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            tracked = CGroupsTelemetry._tracked

            self.assertTrue(configurator.enabled(), "Cgroups should be enabled")
            self.assertFalse(any(cg for cg in tracked.values() if cg.name == 'walinuxagent.service' and 'memory' in cg.path),
                             "The Agent's memory should not be tracked. Tracked: {0}".format(tracked))

    def test_initialize_should_not_enable_cgroups_when_the_cpu_and_memory_controllers_are_not_present(self):
        command_mocks = [MockCommand(r"^findmnt -t cgroup --noheadings$",
'''/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices
/sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma
/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event
/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio
/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset
/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer
/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids
''')]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            tracked = CGroupsTelemetry._tracked

            self.assertFalse(configurator.enabled(), "Cgroups should not be enabled")
            self.assertEqual(len(tracked), 0, "No cgroups should be tracked. Tracked: {0}".format(tracked))

    def test_initialize_should_not_enable_cgroups_when_the_agent_is_not_in_the_system_slice(self):
        command_mocks = [MockCommand(r"^findmnt -t cgroup --noheadings$",
'''/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd*
/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices
/sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma
/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event
/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio
/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset
/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer
/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids
''')]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            tracked = CGroupsTelemetry._tracked
            agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota)

            self.assertFalse(configurator.enabled(), "Cgroups should not be enabled")
            self.assertEqual(len(tracked), 0, "No cgroups should be tracked. Tracked: {0}".format(tracked))
            self.assertFalse(os.path.exists(agent_drop_in_file_cpu_quota), "{0} should not have been created".format(agent_drop_in_file_cpu_quota))

    def test_initialize_should_enable_cgroups_v2(self):
        with self._get_cgroup_configurator_v2() as configurator:
            self.assertTrue(configurator.enabled(), "cgroups were not enabled")

    def test_initialize_should_start_tracking_the_agent_cgroups_in_v2(self):
        with self._get_cgroup_configurator_v2() as configurator:
            tracked = CGroupsTelemetry._tracked

            self.assertTrue(configurator.enabled(), "Cgroups should be enabled")
            self.assertTrue(any(cg for cg in tracked if tracked[cg].name == AGENT_NAME_TELEMETRY and 'cpu' in cg),
                            "The Agent's CPU is not being tracked. Tracked: {0}".format(tracked))
            self.assertTrue(any(cg for cg in tracked if tracked[cg].name == AGENT_NAME_TELEMETRY and 'memory' in cg),
                            "The Agent's Memory is not being tracked. Tracked: {0}".format(tracked))

    def test_initialize_should_not_enable_cgroups_when_the_cpu_and_memory_controllers_are_not_present_in_v2(self):
        with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=[]):
            with self._get_cgroup_configurator_v2() as configurator:
                tracked = CGroupsTelemetry._tracked

                self.assertFalse(configurator.enabled(), "Cgroups should not be enabled")
                self.assertEqual(len(tracked), 0, "No cgroups should be tracked. Tracked: {0}".format(tracked))

    def test_initialize_should_start_tracking_other_controllers_when_one_is_not_present_in_v2(self):
        with patch('azurelinuxagent.ga.cgroupapi.SystemdCgroupApiv2._get_controllers_enabled_at_root', return_value=['memory']):
            with self._get_cgroup_configurator_v2() as configurator:
                tracked = CGroupsTelemetry._tracked

                self.assertTrue(configurator.enabled(), "Cgroups should be enabled")
                self.assertFalse(
                    any(cg for cg in tracked if tracked[cg].name == AGENT_NAME_TELEMETRY and 'cpu' in cg),
                    "The Agent's cpu is being tracked. Tracked: {0}".format(tracked))

    def test_agent_enforcement_enabled_in_v2(self):
        with self._get_cgroup_configurator_v2() as configurator:
            cmd = 'systemctl set-property walinuxagent.service CPUQuota=50% --runtime'
            self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to set CPU quota was not called")

    def test_extension_enforcement_enabled_in_v2(self):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        with self._get_cgroup_configurator_v2() as configurator:
            configurator.setup_extension_slice(extension_name="Microsoft.CPlat.Extension", cpu_quota=5)
            cmd = 'systemctl set-property azure-vmextensions-Microsoft.CPlat.Extension.slice CPUAccounting=yes MemoryAccounting=yes CPUQuota=5% --runtime'
            self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to set the CPU quota was not called")

            configurator.set_extension_services_cpu_memory_quota(service_list)
            cmd = 'systemctl set-property extension.service CPUAccounting=yes MemoryAccounting=yes CPUQuota=5% --runtime'
            self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to set the CPU quota was not called")

    def test_initialize_should_not_create_unit_files(self):
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            azure_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.azure)
            extensions_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.vmextensions)
            agent_drop_in_file_slice = configurator.mocks.get_mapped_path(UnitFilePaths.slice)
            agent_drop_in_file_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_accounting)
            agent_drop_in_file_memory_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.memory_accounting)

            # The mock creates the slice unit files; delete them
            os.remove(azure_slice_unit_file)
            os.remove(extensions_slice_unit_file)

            # The service file for the agent includes settings for the slice and cpu accounting, but not for cpu quota; initialize()
            # should not create drop in files for the first 2, but it should create one for the cpu quota
            self.assertFalse(os.path.exists(azure_slice_unit_file), "{0} should not have been created".format(azure_slice_unit_file))
            self.assertFalse(os.path.exists(extensions_slice_unit_file), "{0} should not have been created".format(extensions_slice_unit_file))
            self.assertFalse(os.path.exists(agent_drop_in_file_slice), "{0} should not have been created".format(agent_drop_in_file_slice))
            self.assertFalse(os.path.exists(agent_drop_in_file_cpu_accounting), "{0} should not have been created".format(agent_drop_in_file_cpu_accounting))
            self.assertFalse(os.path.exists(agent_drop_in_file_memory_accounting), "{0} should not have been created".format(agent_drop_in_file_memory_accounting))

    def test_initialize_should_create_azure_and_vmextensions_slice_file_when_the_agent_service_file_is_not_updated(self):
        with self._get_cgroup_configurator(initialize=False) as configurator:
            # get the paths to the mocked files
            azure_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.azure)
            extensions_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.vmextensions)
            agent_drop_in_file_slice = configurator.mocks.get_mapped_path(UnitFilePaths.slice)
            agent_drop_in_file_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_accounting)
            agent_drop_in_file_memory_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.memory_accounting)

            # The mock creates the service and slice unit files; replace the former and delete the latter
            configurator.mocks.add_data_file(os.path.join(data_dir, 'init', "walinuxagent.service.previous"), UnitFilePaths.walinuxagent)
            os.remove(azure_slice_unit_file)
            os.remove(extensions_slice_unit_file)

            configurator.initialize()

            # The older service file for the agent did not include settings for the slice and cpu parameters; in that case, initialize() should
            # create drop in files to set those properties
            self.assertTrue(os.path.exists(azure_slice_unit_file), "{0} was not created".format(azure_slice_unit_file))
            self.assertTrue(os.path.exists(extensions_slice_unit_file), "{0} was not created".format(extensions_slice_unit_file))
            self.assertTrue(os.path.exists(agent_drop_in_file_slice), "{0} was not created".format(agent_drop_in_file_slice))
            self.assertFalse(os.path.exists(agent_drop_in_file_cpu_accounting), "{0} was created".format(agent_drop_in_file_cpu_accounting))
            self.assertFalse(os.path.exists(agent_drop_in_file_memory_accounting), "{0} was created".format(agent_drop_in_file_memory_accounting))

    def test_initialize_should_clear_logcollector_slice(self):
        with self._get_cgroup_configurator(initialize=False) as configurator:
            log_collector_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.logcollector)

            # The mock creates the slice unit file
            configurator.mocks.add_data_file(os.path.join(data_dir, 'init', "azure-walinuxagent-logcollector.slice"), UnitFilePaths.logcollector)
            self.assertTrue(os.path.exists(log_collector_unit_file), "{0} was not created".format(log_collector_unit_file))

            configurator.initialize()

            # initialize() should remove the unit file
            self.assertFalse(os.path.exists(log_collector_unit_file), "{0} should not have been created".format(log_collector_unit_file))

    def test_setup_extension_slice(self):
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            extension_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.extensionslice)
            extension_name = "Microsoft.CPlat.Extension"
            cpu_quota = 5
            configurator.setup_extension_slice(extension_name=extension_name, cpu_quota=cpu_quota)
            command = 'systemctl set-property azure-vmextensions-{0}.slice CPUAccounting=yes MemoryAccounting=yes CPUQuota={1}% --runtime'.format(extension_name, cpu_quota)
            self.assertIn(command, configurator.mocks.commands_call_list, "The command to set the CPU quota was not called")
            self.assertFalse(os.path.exists(extension_slice_unit_file), "{0} should not have been created".format(extension_slice_unit_file))

    def test_reset_extension_quota(self):
        command_mocks = [MockCommand(r"^systemctl show (.+) --property CPUQuotaPerSecUSec",
'''CPUQuotaPerSecUSec=5ms
''')]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            extension_name = "Microsoft.CPlat.Extension"
            configurator.reset_extension_quota(extension_name=extension_name)
            command = 'systemctl set-property azure-vmextensions-{0}.slice CPUQuota= --runtime'.format(extension_name)
            self.assertIn(command, configurator.mocks.commands_call_list, "The command to reset the CPU quota was not called")

    def test_it_should_handle_exceptions_when_reset_extension_quota_fails(self):
        command_mocks = [MockCommand(r"systemctl show (.+) --property CPUQuotaPerSecUSec", return_value=1, stdout='', stderr='Failed to get properties: Access denied')]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            extension_name = "Microsoft.CPlat.Extension"
            configurator.reset_extension_quota(extension_name=extension_name)
            command_set = 'systemctl set-property azure-vmextensions-{0}.slice CPUQuota= --runtime'.format(extension_name)
            self.assertIn(command_set, configurator.mocks.commands_call_list, "The command to reset the CPU quota was not called")
            command_get = 'systemctl show azure-vmextensions-{0}.slice --property CPUQuotaPerSecUSec'.format(extension_name)
            self.assertIn(command_get, configurator.mocks.commands_call_list, "The command to get the CPU quota was not called")

    def test_enable_should_raise_cgroups_exception_when_cgroups_are_not_supported(self):
        with self._get_cgroup_configurator(enable=False) as configurator:
            with patch.object(configurator, "supported", return_value=False):
                with self.assertRaises(CGroupsException) as context_manager:
                    configurator.enable()
                self.assertIn("Attempted to enable cgroups, but they are not supported on the current platform", str(context_manager.exception))

    def
test_enable_should_set_agent_cpu_quota_and_track_throttled_time(self): with self._get_cgroup_configurator(initialize=False) as configurator: agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota) if os.path.exists(agent_drop_in_file_cpu_quota): raise Exception("{0} should not have been created during test setup".format(agent_drop_in_file_cpu_quota)) configurator.initialize() expected_quota = "CPUQuota={0}%".format(conf.get_agent_cpu_quota()) self.assertFalse(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was not created".format(agent_drop_in_file_cpu_quota)) cmd = 'systemctl set-property walinuxagent.service {0} --runtime'.format(expected_quota) self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to set the CPU quota was not called") def test_enable_should_not_track_controllers_when_cgroups_v2_enabled(self): with self._get_cgroup_configurator_v2(initialize=False) as configurator: if len(CGroupsTelemetry._tracked) > 0: raise Exception("Test setup should not start tracking Throttle Time") configurator.mocks.add_file(UnitFilePaths.cpu_quota, Exception("A TEST EXCEPTION")) configurator.initialize() self.assertEqual(len(CGroupsTelemetry._tracked), 0, "Throttle time should not be tracked when using cgroups v2") def test_disable_should_reset_cpu_quota(self): with self._get_cgroup_configurator() as configurator: if len(CGroupsTelemetry._tracked) == 0: raise Exception("Test setup should have started tracking at least 1 cgroup (the agent's)") configurator.disable("UNIT TEST", DisableCgroups.AGENT) agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota) self.assertFalse(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was created".format(agent_drop_in_file_cpu_quota)) self.assertEqual(len(CGroupsTelemetry._tracked), 1, "Memory cgroups should be tracked after disable. 
Tracking: {0}".format(CGroupsTelemetry._tracked)) self.assertFalse(any(cg for cg in CGroupsTelemetry._tracked.values() if cg.name == 'walinuxagent.service' and 'cpu' in cg.path), "The Agent's cpu should not be tracked. Tracked: {0}".format(CGroupsTelemetry._tracked)) def test_disable_should_reset_cpu_quota_for_all_cgroups(self): service_list = [ { "name": "extension.service", "cpuQuotaPercentage": 5 } ] extension_name = "Microsoft.CPlat.Extension" extension_services = {extension_name: service_list} with self._get_cgroup_configurator() as configurator: with patch.object(configurator, "get_extension_services_list", return_value=extension_services): # get the paths to the mocked files agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota) extension_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.extensionslice) extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota) configurator.setup_extension_slice(extension_name=extension_name, cpu_quota=5) configurator.set_extension_services_cpu_memory_quota(service_list) CGroupsTelemetry._tracked['/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'] = \ CpuControllerV1('extension.service', '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service') CGroupsTelemetry._tracked['/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/' \ 'azure-vmextensions-Microsoft.CPlat.Extension.slice'] = \ CpuControllerV1('Microsoft.CPlat.Extension', '/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.CPlat.Extension.slice') configurator.disable("UNIT TEST", DisableCgroups.ALL) self.assertFalse(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was created".format(agent_drop_in_file_cpu_quota)) self.assertFalse(os.path.exists(extension_slice_unit_file), "{0} was created".format(extension_slice_unit_file)) self.assertFalse(os.path.exists(extension_service_cpu_quota), "{0} was 
created".format(extension_service_cpu_quota)) @patch('time.sleep', side_effect=lambda _: mock_sleep()) def test_start_extension_command_should_not_use_systemd_when_cgroups_are_not_enabled(self, _): with self._get_cgroup_configurator() as configurator: configurator.disable("UNIT TEST", DisableCgroups.ALL) with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as patcher: configurator.start_extension_command( extension_name="Microsoft.Compute.TestExtension-1.2.3", command="date", cmd_name="test", timeout=300, shell=False, cwd=self.tmp_dir, env={}.update(os.environ), stdout=subprocess.PIPE, stderr=subprocess.PIPE) command_calls = [args[0] for args, _ in patcher.call_args_list if len(args) > 0 and "date" in args[0]] self.assertEqual(len(command_calls), 1, "The test command should have been called exactly once [{0}]".format(command_calls)) self.assertNotIn("systemd-run", command_calls[0], "The command should not have been invoked using systemd") self.assertEqual(command_calls[0], "date", "The command line should not have been modified") @patch('time.sleep', side_effect=lambda _: mock_sleep()) def test_start_extension_command_should_use_systemd_run_when_cgroups_v1_are_enabled(self, _): with self._get_cgroup_configurator() as configurator: with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch: configurator.start_extension_command( extension_name="Microsoft.Compute.TestExtension-1.2.3", command="the-test-extension-command", cmd_name="test", timeout=300, shell=False, cwd=self.tmp_dir, env={}.update(os.environ), stdout=subprocess.PIPE, stderr=subprocess.PIPE) command_calls = [args[0] for (args, _) in popen_patch.call_args_list if "the-test-extension-command" in args[0]] self.assertEqual(len(command_calls), 1, "The test command should have been called exactly once [{0}]".format(command_calls)) self.assertIn("systemd-run", command_calls[0], "The extension should have been invoked using systemd") 
    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_start_tracking_the_extension_cgroups(self, _):
        # CPU usage is initialized when we begin tracking a CPU cgroup; since this test does not retrieve the
        # CPU usage, there is no need for initialization
        with self._get_cgroup_configurator() as configurator:
            configurator.start_extension_command(
                extension_name="Microsoft.Compute.TestExtension-1.2.3",
                command="test command",
                cmd_name="test",
                timeout=300,
                shell=False,
                cwd=self.tmp_dir,
                env={}.update(os.environ),
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE)

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'cpu' in cg.path),
                "The extension's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'memory' in cg.path),
                "The extension's Memory is not being tracked")

    def test_start_extension_command_should_raise_an_exception_when_the_command_cannot_be_started(self):
        with self._get_cgroup_configurator() as configurator:
            original_popen = subprocess.Popen

            def mock_popen(command_arg, *args, **kwargs):
                if "test command" in command_arg:
                    raise Exception("A TEST EXCEPTION")
                return original_popen(command_arg, *args, **kwargs)

            with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen):
                with self.assertRaises(Exception) as context_manager:
                    configurator.start_extension_command(
                        extension_name="Microsoft.Compute.TestExtension-1.2.3",
                        command="test command",
                        cmd_name="test",
                        timeout=300,
                        shell=False,
                        cwd=self.tmp_dir,
                        env={}.update(os.environ),
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

                self.assertIn("A TEST EXCEPTION", str(context_manager.exception))

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_use_systemd_when_cgroup_v2_enabled(self, _):
        with self._get_cgroup_configurator_v2() as configurator:
            with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                configurator.start_extension_command(
                    extension_name="Microsoft.Compute.TestExtension-1.2.3",
                    command="the-test-extension-command",
                    cmd_name="test",
                    timeout=300,
                    shell=False,
                    cwd=self.tmp_dir,
                    env={}.update(os.environ),
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE)

                command_calls = [args[0] for (args, _) in popen_patch.call_args_list if "the-test-extension-command" in args[0]]

                self.assertEqual(len(command_calls), 1, "The test command should have been called exactly once [{0}]".format(command_calls))
                self.assertIn("systemd-run", command_calls[0], "The extension should have been invoked using systemd")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_disable_cgroups_and_invoke_the_command_directly_if_systemd_fails(self, _):
        with self._get_cgroup_configurator() as configurator:
            configurator.mocks.add_command(MockCommand("systemd-run", return_value=1, stdout='', stderr='Failed to start transient scope unit: syntax error'))

            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as output_file:
                with patch("azurelinuxagent.ga.cgroupapi.add_event") as mock_add_event:
                    with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                        CGroupsTelemetry.reset()

                        command = "echo TEST_OUTPUT"
                        command_output = configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command=command,
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={}.update(os.environ),
                            stdout=output_file,
                            stderr=output_file)

                        self.assertFalse(configurator.enabled(), "Cgroups should have been disabled")

                        disabled_events = [kwargs for _, kwargs in mock_add_event.call_args_list if kwargs['op'] == WALAEventOperation.CGroupsDisabled]
                        self.assertTrue(len(disabled_events) == 1, "Exactly one CGroupsDisabled telemetry event should have been issued. Found: {0}".format(disabled_events))
                        self.assertIn("Failed to start Microsoft.Compute.TestExtension-1.2.3 using systemd-run", disabled_events[0]['message'],
                                      "The systemd-run failure was not included in the telemetry message")
                        self.assertEqual(False, disabled_events[0]['is_success'], "The telemetry event should indicate a failure")

                        extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if command in args[0]]

                        self.assertEqual(2, len(extension_calls), "The extension should have been invoked exactly twice")
                        self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")
                        self.assertEqual(command, extension_calls[1], "The second call to the extension should not have used systemd")

                        self.assertEqual(len(CGroupsTelemetry._tracked), 0, "No cgroups should have been created")

                        self.assertIn("TEST_OUTPUT\n", command_output, "The test output was not captured")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_disable_cgroups_and_invoke_the_command_directly_if_systemd_fails_and_reset_fails(self, _):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        extension_name = "Microsoft.Compute.TestExtension"
        extension_services = {extension_name: service_list}
        with self._get_cgroup_configurator() as configurator:
            with patch.object(configurator, "get_extension_services_list", return_value=extension_services):
                configurator.mocks.add_command(MockCommand("systemd-run", return_value=1, stdout='', stderr='Failed to start transient scope unit: syntax error'))
                configurator.mocks.add_command(MockCommand(r"^systemctl show (.+) --property CPUQuotaPerSecUSec", return_value=1, stdout='', stderr='Failed to get properties: Access denied'))

                with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as output_file:
                    with patch("azurelinuxagent.ga.cgroupapi.add_event") as mock_add_event:
                        with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                            CGroupsTelemetry.reset()

                            command = "echo TEST_OUTPUT"
                            command_output = configurator.start_extension_command(
                                extension_name="Microsoft.Compute.TestExtension-1.2.3",
                                command=command,
                                cmd_name="test",
                                timeout=300,
                                shell=True,
                                cwd=self.tmp_dir,
                                env={}.update(os.environ),
                                stdout=output_file,
                                stderr=output_file)

                            self.assertFalse(configurator.enabled(), "Cgroups should have been disabled")

                            disabled_events = [kwargs for _, kwargs in mock_add_event.call_args_list if kwargs['op'] == WALAEventOperation.CGroupsDisabled]
                            self.assertTrue(len(disabled_events) == 1, "Exactly one CGroupsDisabled telemetry event should have been issued. Found: {0}".format(disabled_events))
                            self.assertIn("Failed to start Microsoft.Compute.TestExtension-1.2.3 using systemd-run", disabled_events[0]['message'],
                                          "The systemd-run failure was not included in the telemetry message")
                            self.assertEqual(False, disabled_events[0]['is_success'], "The telemetry event should indicate a failure")

                            extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if command in args[0]]

                            self.assertEqual(2, len(extension_calls), "The extension should have been invoked exactly twice")
                            self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")
                            self.assertEqual(command, extension_calls[1], "The second call to the extension should not have used systemd")

                            self.assertEqual(len(CGroupsTelemetry._tracked), 0, "No cgroups should have been created")

                            self.assertIn("TEST_OUTPUT\n", command_output, "The test output was not captured")

                            failed_systemctl_events = [kwargs for _, kwargs in mock_add_event.call_args_list if
                                                       kwargs['op'] == WALAEventOperation.CGroupsInfo and "Failed to get current CPUQuotaPerSecUSec" in kwargs['message']]
                            # we should have at least 3 telemetry events: agent + extension + extension service
                            self.assertEqual(len(failed_systemctl_events), 3, "The systemctl error should have occurred: {0}".format(failed_systemctl_events))
                            self.assertIn("Failed to get properties: Access denied", failed_systemctl_events[0]['message'],
                                          "The systemctl error was not included in the telemetry message")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_disable_cgroups_and_invoke_the_command_directly_if_systemd_times_out(self, _):
        with self._get_cgroup_configurator() as configurator:
            # Systemd has its own internal timeout, which is shorter than the timeout we define for extension operations.
            # When systemd times out, it writes a message to stderr and exits with exit code 1.
            # In that case, we internally recognize the failure from the non-zero exit code, not as a timeout.
            configurator.mocks.add_command(MockCommand("systemd-run", return_value=1, stdout='', stderr='Failed to start transient scope unit: Connection timed out'))

            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
                with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                    with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                        CGroupsTelemetry.reset()

                        configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command="echo 'success'",
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={}.update(os.environ),
                            stdout=stdout,
                            stderr=stderr)

                        self.assertFalse(configurator.enabled(), "Cgroups should have been disabled")

                        extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if "echo 'success'" in args[0]]
                        self.assertEqual(2, len(extension_calls), "The extension should have been called twice. Got: {0}".format(extension_calls))
                        self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")
                        self.assertNotIn("systemd-run", extension_calls[1], "The second call to the extension should not have used systemd")

                        self.assertEqual(len(CGroupsTelemetry._tracked), 0, "No cgroups should have been created")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_capture_only_the_last_subprocess_output(self, _):
        with self._get_cgroup_configurator() as configurator:
            original_popen = subprocess.Popen

            def mock_popen(command, *args, **kwargs):
                # Inject a syntax error into the call.
                # Popen can accept both strings and lists; handle both here.
                if isinstance(command, str) and command.startswith('systemd-run'):
                    command = 'systemd-run syntax_error'
                elif isinstance(command, list) and command[0] == 'systemd-run':
                    command = ['systemd-run', 'syntax_error']

                return original_popen(command, *args, **kwargs)

            expected_output = "[stdout]\n{0}\n\n\n[stderr]\n"

            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
                with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                    with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen):
                        # We expect this call to fail because of the syntax error
                        process_output = configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command="echo 'very specific test message'",
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={}.update(os.environ),
                            stdout=stdout,
                            stderr=stderr)

                        self.assertEqual(expected_output.format("very specific test message"), process_output)

    def test_it_should_set_extension_services_cpu_memory_quota(self):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            extension_service_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_accounting)
            extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota)

            configurator.set_extension_services_cpu_memory_quota(service_list)

            expected_cpu_accounting = "CPUAccounting=yes"
            expected_cpu_quota_percentage = "CPUQuota=5%"
            expected_memory_accounting = "MemoryAccounting=yes"

            # the drop-in files should not be created
            self.assertFalse(os.path.exists(extension_service_cpu_accounting), "{0} was created".format(extension_service_cpu_accounting))
            self.assertFalse(os.path.exists(extension_service_cpu_quota), "{0} was created".format(extension_service_cpu_quota))

            cmd = 'systemctl set-property extension.service {0} {1} {2} --runtime'.format(expected_cpu_accounting, expected_memory_accounting, expected_cpu_quota_percentage)
            self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to set the CPU quota was not called")

    def test_it_should_not_update_quota_when_quota_is_not_changed(self):
        command_mocks = [MockCommand(r"^systemctl show extension\.service --property CPUQuotaPerSecUSec", '''CPUQuotaPerSecUSec=50ms
'''),
                         MockCommand(r"^systemctl show extension\.service --property CPUAccounting", '''CPUAccounting=yes
'''),
                         MockCommand(r"^systemctl show extension\.service --property MemoryAccounting", '''MemoryAccounting=yes
''')]
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            configurator.set_extension_services_cpu_memory_quota(service_list)
            cmd = 'systemctl set-property extension.service'
            commands_list = configurator.mocks.commands_call_list
            for command in commands_list:
                self.assertNotIn(cmd, command, "The command to set the CPU quota was called")

    def test_it_should_set_extension_services_when_quotas_not_defined(self):
        service_list = [
            {
                "name": "extension.service"
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            extension_service_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_accounting)
            extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota)
            extension_service_memory_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_memory_accounting)
            extension_service_memory_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_memory_limit)

            configurator.set_extension_services_cpu_memory_quota(service_list)

            command = 'systemctl set-property extension.service CPUAccounting=yes MemoryAccounting=yes --runtime'
            self.assertIn(command, configurator.mocks.commands_call_list, "The command to set cgroups was not called")
            self.assertFalse(os.path.exists(extension_service_cpu_accounting), "{0} was created".format(extension_service_cpu_accounting))
            self.assertFalse(os.path.exists(extension_service_cpu_quota), "{0} should not have been created during setup".format(extension_service_cpu_quota))
            self.assertFalse(os.path.exists(extension_service_memory_accounting), "{0} was created".format(extension_service_memory_accounting))
            self.assertFalse(os.path.exists(extension_service_memory_quota), "{0} should not have been created during setup".format(extension_service_memory_quota))

    def test_it_should_handle_systemd_errors_when_set_extension_services_cpu_memory_quota(self):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            },
            {
                "name": "extension2.service",
                "cpuQuotaPercentage": 10
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            with patch("azurelinuxagent.ga.cgroupapi.add_event") as mock_add_event:
                configurator.mocks.add_command(MockCommand("systemctl show extension.service --property CPUAccounting", return_value=1, stdout='', stderr='Failed to set properties: connection timed out'))
                configurator.mocks.add_command(MockCommand("systemctl set-property extension2.service CPUAccounting=yes MemoryAccounting=yes CPUQuota=10% --runtime", return_value=1, stdout='', stderr='Failed to set properties: Access denied'))
                configurator.set_extension_services_cpu_memory_quota(service_list)
                commands_list = configurator.mocks.commands_call_list
                extension_command_set = 'systemctl set-property extension.service CPUAccounting=yes MemoryAccounting=yes CPUQuota=5% --runtime'
                extension2_command_set = 'systemctl set-property extension2.service CPUAccounting=yes MemoryAccounting=yes CPUQuota=10% --runtime'
                systemd_error_timed_out_event = [kwargs for _, kwargs in mock_add_event.call_args_list if
                                                 kwargs['op'] == WALAEventOperation.CGroupsInfo and "connection timed out" in kwargs['message']]
                systemd_error_access_denied_event = [kwargs for _, kwargs in mock_add_event.call_args_list if
                                                     kwargs['op'] == WALAEventOperation.CGroupsInfo and "Access denied" in kwargs['message']]
                # the first service (extension.service) should not call set-property, since getting its properties failed
                self.assertNotIn(extension_command_set, commands_list, "The command to set the CPU quota was called")
                self.assertEqual(len(systemd_error_timed_out_event), 1, "The systemd timed-out error should have occurred: {0}".format(systemd_error_timed_out_event))
                # the second service (extension2.service)
                self.assertIn(extension2_command_set, commands_list, "The command to set the CPU quota was not called")
                self.assertEqual(len(systemd_error_access_denied_event), 1, "The systemd access-denied error should have occurred: {0}".format(systemd_error_access_denied_event))

    def test_it_should_start_tracking_extension_services_cgroups(self):
        service_list = [
            {
                "name": "extension.service"
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            configurator.start_tracking_extension_services_cgroups(service_list)

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                "The extension service's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                "The extension service's Memory is not being tracked")

    def test_it_should_stop_tracking_extension_services_cgroups(self):
        service_list = [
            {
                "name": "extension.service"
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            with patch("os.path.exists") as mock_path:
                mock_path.return_value = True
                CGroupsTelemetry.track_cgroup_controller(
                    CpuControllerV1('extension.service', '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'))
                configurator.stop_tracking_extension_services_cgroups(service_list)

                tracked = CGroupsTelemetry._tracked

                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                    "The extension service's CPU is being tracked")
                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                    "The extension service's Memory is being tracked")

    def test_it_should_reset_extension_services_quota(self):
        command_mocks = [MockCommand(r"^systemctl show extension\.service --property CPUQuotaPerSecUSec", '''CPUQuotaPerSecUSec=5ms
''')]
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            configurator.reset_extension_services_quota(service_list)
            cmd = 'systemctl set-property extension.service CPUQuota= --runtime'
            self.assertIn(cmd, configurator.mocks.commands_call_list, "The command to reset the CPU quota was not called")

    def test_it_should_start_tracking_unit_cgroups(self):
        with self._get_cgroup_configurator() as configurator:
            configurator.start_tracking_unit_cgroups("extension.service")

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                "The extension service's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                "The extension service's Memory is not being tracked")

    def test_it_should_stop_tracking_unit_cgroups(self):
        def side_effect(path):
            if path == '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service':
                return True
            return False

        with self._get_cgroup_configurator() as configurator:
            with patch("os.path.exists") as mock_path:
                mock_path.side_effect = side_effect
                CGroupsTelemetry._tracked['cpu:/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'] = \
                    CpuControllerV1('extension.service', '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service')

                configurator.stop_tracking_unit_cgroups("extension.service")

                tracked = CGroupsTelemetry._tracked

                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                    "The extension service's CPU is being tracked")
                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                    "The extension service's Memory is being tracked")

    def test_check_processes_in_agent_cgroup_should_raise_a_cgroups_exception_when_there_are_unexpected_processes_in_the_agent_cgroup(self):
        with patch('azurelinuxagent.common.conf.get_cgroup_disable_on_process_check_failure', return_value=True):
            with self._get_cgroup_configurator() as configurator:
                pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict with the mock Popen below

            # The test script recursively creates a given number of descendant processes, then it blocks until the
            # 'stop_file' exists. It produces an output file containing the PID of each descendant process.
            test_script = os.path.join(self.tmp_dir, "create_processes.sh")
            stop_file = os.path.join(self.tmp_dir, "create_processes.stop")

            AgentTestCase.create_script(test_script, """
#!/usr/bin/env bash
set -euo pipefail

if [[ $# != 2 ]]; then
    echo "Usage: $0 "
    exit 1
fi

echo $$ >> $1

if [[ $2 > 1 ]]; then
    $0 $1 $(($2 - 1))
else
    timeout 30s /usr/bin/env bash -c "while ! [[ -f {0} ]]; do sleep 0.25s; done"
fi

exit 0
""".format(stop_file))

            number_of_descendants = 3

            def wait_for_processes(processes_file):
                def _all_present():
                    if os.path.exists(processes_file):
                        with open(processes_file, "r") as file_stream:
                            _all_present.processes = [int(process) for process in file_stream.read().split()]
                    return len(_all_present.processes) >= number_of_descendants
                _all_present.processes = []

                if not wait_for(_all_present):
                    raise Exception("Timeout waiting for processes. Expected {0}; got: {1}".format(
                        number_of_descendants, format_processes(_all_present.processes)))

                return _all_present.processes

            threads = []

            try:
                #
                # Start the processes that will be used by the test. We use two sets of processes: the first set simulates a command executed by the agent
                # (e.g. iptables) and its child processes, if any. The second set of processes simulates an extension.
                #
                agent_command_output = os.path.join(self.tmp_dir, "agent_command.pids")
                agent_command = threading.Thread(target=lambda: shellutil.run_command([test_script, agent_command_output, str(number_of_descendants)]))
                agent_command.start()
                threads.append(agent_command)
                agent_command_processes = wait_for_processes(agent_command_output)

                extension_output = os.path.join(self.tmp_dir, "extension.pids")

                def start_extension():
                    original_sleep = time.sleep
                    original_popen = subprocess.Popen

                    # Extensions are started using systemd-run; mock Popen to remove the call to systemd-run; the test script creates a couple of
                    # child processes, which would simulate the extension's processes.
                    def mock_popen(command, *args, **kwargs):
                        match = re.match(r"^systemd-run --property=CPUAccounting=no --property=MemoryAccounting=no --unit=[^\s]+ --scope --slice=[^\s]+ (.+)", command)
                        is_systemd_run = match is not None
                        if is_systemd_run:
                            command = match.group(1)
                        process = original_popen(command, *args, **kwargs)
                        if is_systemd_run:
                            start_extension.systemd_run_pid = process.pid
                        return process

                    with patch('time.sleep', side_effect=lambda _: original_sleep(0.1)):  # start_extension_command has a small delay; skip it
                        with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen):
                            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
                                with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                                    configurator.start_extension_command(
                                        extension_name="TestExtension",
                                        command="{0} {1} {2}".format(test_script, extension_output, number_of_descendants),
                                        cmd_name="test",
                                        timeout=30,
                                        shell=True,
                                        cwd=self.tmp_dir,
                                        env={},
                                        stdout=stdout,
                                        stderr=stderr)

                start_extension.systemd_run_pid = None

                extension = threading.Thread(target=start_extension)
                extension.start()
                threads.append(extension)
                extension_processes = wait_for_processes(extension_output)

                #
                # check_processes_in_agent_cgroup uses shellutil and the cgroups api to get the commands that are currently running;
                # wait for all the processes to show up
                #
                if not wait_for(lambda: len(shellutil.get_running_commands()) > 0 and len(configurator._cgroups_api.get_systemd_run_commands()) > 0):
                    raise Exception("Timeout while attempting to track the child commands")

                #
                # Verify that check_processes_in_agent_cgroup raises when there are unexpected processes in the agent's cgroup.
                #
                # For the agent's processes, we use the current process and its parent (in the actual agent these would be the daemon and the extension
                # handler), and the commands started by the agent.
                #
                # For other processes, we use a process that already completed, and an extension process. Note that extensions are started using
                # systemd-run and the process for that command belongs to the agent's cgroup, but the processes for the extension should be in a
                # different cgroup
                #
                def get_completed_process():
                    random.seed()
                    completed = random.randint(1000, 10000)
                    while os.path.exists("/proc/{0}".format(completed)):  # ensure we do not use an existing process
                        completed = random.randint(1000, 10000)
                    return completed

                agent_processes = [os.getppid(), os.getpid()] + agent_command_processes + [start_extension.systemd_run_pid]
                other_processes = [get_completed_process()] + extension_processes

                with patch("azurelinuxagent.ga.cgroupapi.CgroupV1.get_processes", return_value=agent_processes + other_processes):
                    with self.assertRaises(CGroupsException) as context_manager:
                        configurator._check_processes_in_agent_cgroup(False)
                        # will raise an exception if the processes are not as expected in the second call
                        configurator._check_processes_in_agent_cgroup(False)

                    # The list of processes in the message is an array of strings: "['foo', ..., 'bar']"
                    message = ustr(context_manager.exception)
                    search = re.search(r'unexpected processes: \[(?P<processes>.+)\]', message)
                    self.assertIsNotNone(search, "The event message is not in the expected format: {0}".format(message))
                    reported = search.group('processes').split(',')

                    self.assertEqual(
                        len(other_processes), len(reported),
                        "An incorrect number of processes was reported. Expected: {0} Got: {1}".format(format_processes(other_processes), reported))
                    for pid in other_processes:
                        self.assertTrue(
                            any("[PID: {0}]".format(pid) in reported_process for reported_process in reported),
                            "Process {0} was not reported.
Got: {1}".format(format_processes([pid]), reported)) finally: # create the file that stops the test processes and wait for them to complete open(stop_file, "w").close() for thread in threads: thread.join(timeout=5) def test_check_agent_throttled_time_should_raise_a_cgroups_exception_when_the_threshold_is_exceeded(self): metrics = [MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.THROTTLED_TIME, AGENT_NAME_TELEMETRY, conf.get_agent_cpu_throttled_time_threshold() + 1)] with self.assertRaises(CGroupsException) as context_manager: CGroupConfigurator._Impl._check_agent_throttled_time(metrics) self.assertIn("The agent has been throttled", ustr(context_manager.exception), "An incorrect exception was raised") def test_check_cgroups_should_disable_cgroups_when_a_check_fails(self): with self._get_cgroup_configurator() as configurator: checks = ["_check_processes_in_agent_cgroup", "_check_agent_throttled_time"] for method_to_fail in checks: patchers = [] try: # mock 'method_to_fail' to raise an exception and the rest to do nothing for method_to_mock in checks: side_effect = CGroupsException(method_to_fail) if method_to_mock == method_to_fail else lambda *_: None p = patch.object(configurator, method_to_mock, side_effect=side_effect) patchers.append(p) p.start() with patch("azurelinuxagent.ga.cgroupapi.add_event") as add_event: with patch('azurelinuxagent.common.conf.get_cgroup_disable_on_process_check_failure', return_value=True): configurator.enable() tracked_metrics = [ MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.PROCESSOR_PERCENT_TIME, "test", 10)] configurator.check_cgroups(tracked_metrics) if method_to_fail == "_check_processes_in_agent_cgroup": self.assertFalse(configurator.enabled(), "An error in {0} should have disabled cgroups".format(method_to_fail)) else: self.assertFalse(configurator.agent_enabled(), "An error in {0} should have disabled cgroups".format(method_to_fail)) disable_events = [kwargs for _, kwargs in add_event.call_args_list if 
kwargs["op"] == WALAEventOperation.CGroupsDisabled] self.assertTrue( len(disable_events) == 1, "Exactly 1 event should have been emitted when {0} fails. Got: {1}".format(method_to_fail, disable_events)) self.assertIn( "[CGroupsException] {0}".format(method_to_fail), disable_events[0]["message"], "The error message is not correct when {0} failed".format(method_to_fail)) finally: for p in patchers: p.stop() @patch('azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl._check_processes_in_agent_cgroup', side_effect=CGroupsException("Test")) @patch('azurelinuxagent.ga.cgroupapi.add_event') def test_agent_should_not_enable_cgroups_if_unexpected_process_already_in_agent_cgroups(self, add_event, _): command_mocks = [MockCommand(r"^systemctl show walinuxagent\.service --property Slice", '''Slice=azure.slice ''')] original_read_file = fileutil.read_file def mock_read_file(filepath, **args): if filepath == "/proc/self/cgroup": filepath = os.path.join(data_dir, "cgroups", "proc_self_cgroup_azure_slice") return original_read_file(filepath, **args) with self._get_cgroup_configurator(initialize=False, mock_commands=command_mocks) as configurator: with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file): with patch('azurelinuxagent.common.conf.get_cgroup_disable_on_process_check_failure', return_value=True): configurator.initialize() self.assertFalse(configurator.enabled(), "Cgroups should not be enabled") disable_events = [kwargs for _, kwargs in add_event.call_args_list if kwargs["op"] == WALAEventOperation.CGroupsDisabled] self.assertTrue( len(disable_events) == 1, "Exactly 1 event should have been emitted. 
Got: {0}".format(disable_events)) self.assertIn( "Found unexpected processes in the agent cgroup before agent enable cgroups", disable_events[0]["message"], "The error message is not correct when process check failed") def test_check_agent_memory_usage_should_raise_a_cgroups_exception_when_the_limit_is_exceeded(self): metrics = [MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.TOTAL_MEM_USAGE, AGENT_NAME_TELEMETRY, conf.get_agent_memory_quota() + 1), MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.SWAP_MEM_USAGE, AGENT_NAME_TELEMETRY, conf.get_agent_memory_quota() + 1)] with self.assertRaises(AgentMemoryExceededException) as context_manager: with self._get_cgroup_configurator() as configurator: with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_tracked_metrics") as tracked_metrics: tracked_metrics.return_value = metrics configurator.check_agent_memory_usage() self.assertIn("The agent memory limit {0} bytes exceeded".format(conf.get_agent_memory_quota()), ustr(context_manager.exception), "An incorrect exception was raised") def test_get_log_collector_properties_should_return_correct_props(self): with self._get_cgroup_configurator() as configurator: self.assertEqual(configurator.get_logcollector_unit_properties(), ["--property=CPUAccounting=yes", "--property=MemoryAccounting=yes", "--property=CPUQuota=5%"]) with self._get_cgroup_configurator_v2() as configurator: self.assertEqual(configurator.get_logcollector_unit_properties(), ["--property=CPUAccounting=yes", "--property=MemoryAccounting=yes", "--property=CPUQuota=5%", "--property=MemoryHigh=170M"]) Azure-WALinuxAgent-a976115/tests/ga/test_cgroupconfigurator_sudo.py000066400000000000000000000203711510742556200255130ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from __future__ import print_function

import contextlib
import subprocess
import tempfile

from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator
from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry
from azurelinuxagent.common.exception import ExtensionError, ExtensionErrorCodes
from azurelinuxagent.common.future import ustr
from tests.lib.mock_cgroup_environment import mock_cgroup_v1_environment
from tests.lib.tools import AgentTestCase, patch, mock_sleep, i_am_root


class CGroupConfiguratorSystemdTestCaseSudo(AgentTestCase):
    @classmethod
    def tearDownClass(cls):
        CGroupConfigurator._instance = None
        AgentTestCase.tearDownClass()

    @contextlib.contextmanager
    def _get_cgroup_configurator(self, initialize=True, enable=True, mock_commands=None):
        CGroupConfigurator._instance = None
        configurator = CGroupConfigurator.get_instance()
        CGroupsTelemetry.reset()
        with mock_cgroup_v1_environment(self.tmp_dir) as mock_environment:
            if mock_commands is not None:
                for command in mock_commands:
                    mock_environment.add_command(command)
            configurator.mocks = mock_environment
            if initialize:
                if not enable:
                    with patch.object(configurator, "enable"):
                        configurator.initialize()
                else:
                    configurator.initialize()
            yield configurator

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_not_use_fallback_option_if_extension_fails(self, *args):
        self.assertTrue(i_am_root(), "Test does not run when non-root")

        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict the mock Popen below

        command = "ls folder_does_not_exist"

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                    with self.assertRaises(ExtensionError) as context_manager:
                        configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command=command,
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={},
                            stdout=stdout,
                            stderr=stderr)

                    extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if command in args[0]]

                    self.assertEqual(1, len(extension_calls), "The extension should have been invoked exactly once")
                    self.assertIn("systemd-run", extension_calls[0],
                                  "The first call to the extension should have used systemd")

                    self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginUnknownFailure)
                    self.assertIn("Non-zero exit code", ustr(context_manager.exception))
                    # The scope name should appear in the process output since systemd-run was invoked and stderr
                    # wasn't truncated.
                    self.assertRegex(ustr(context_manager.exception), r"Running (scope )?as unit")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    @patch("azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN", 5)
    def test_start_extension_command_should_not_use_fallback_option_if_extension_fails_with_long_output(self, *args):
        self.assertTrue(i_am_root(), "Test does not run when non-root")

        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict the mock Popen below

        long_output = "a" * 20  # large enough to ensure both stdout and stderr are truncated
        long_stdout_stderr_command = "echo {0} && echo {0} >&2 && ls folder_does_not_exist".format(long_output)

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                    with self.assertRaises(ExtensionError) as context_manager:
                        configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command=long_stdout_stderr_command,
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={},
                            stdout=stdout,
                            stderr=stderr)

                    extension_calls = [args[0] for (args, _) in popen_patch.call_args_list
                                       if long_stdout_stderr_command in args[0]]

                    self.assertEqual(1, len(extension_calls), "The extension should have been invoked exactly once")
                    self.assertIn("systemd-run", extension_calls[0],
                                  "The first call to the extension should have used systemd")

                    self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginUnknownFailure)
                    self.assertIn("Non-zero exit code", ustr(context_manager.exception))
                    # stdout and stderr should have been truncated, so the scope name doesn't appear in stderr
                    # even though systemd-run ran
                    self.assertNotIn("Running scope as unit", ustr(context_manager.exception))

    def test_start_extension_command_should_not_use_fallback_option_if_extension_times_out(self, *args):  # pylint: disable=unused-argument
        self.assertTrue(i_am_root(), "Test does not run when non-root")

        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict the mock Popen below

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.extensionprocessutil.wait_for_process_completion_or_timeout",
                           return_value=[True, None, 0]):
                    with patch("azurelinuxagent.ga.cgroupapi._SystemdCgroupApi._is_systemd_failure",
                               return_value=False):
                        with self.assertRaises(ExtensionError) as context_manager:
                            configurator.start_extension_command(
                                extension_name="Microsoft.Compute.TestExtension-1.2.3",
                                command="date",
                                cmd_name="test",
                                timeout=300,
                                shell=True,
                                cwd=self.tmp_dir,
                                env={},
                                stdout=stdout,
                                stderr=stderr)

                        self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginHandlerScriptTimedout)
                        self.assertIn("Timeout", ustr(context_manager.exception))

Azure-WALinuxAgent-a976115/tests/ga/test_cgroupcontroller.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from __future__ import print_function

import os
import random

from azurelinuxagent.ga.cgroupcontroller import _CgroupController
from tests.lib.tools import AgentTestCase, patch


def consume_cpu_time():
    waste = 0
    for x in range(1, 200000):  # pylint: disable=unused-variable
        waste += random.random()
    return waste


class TestCgroupController(AgentTestCase):
    def test_is_active(self):
        test_metrics = _CgroupController("test_extension", self.tmp_dir)

        with open(os.path.join(self.tmp_dir, "cgroup.procs"), mode="wb") as tasks:
            tasks.write(str(1000).encode())

        self.assertEqual(True, test_metrics.is_active())

    @patch("azurelinuxagent.common.logger.periodic_warn")
    def test_is_active_file_not_present(self, patch_periodic_warn):
        test_metrics = _CgroupController("test_extension", self.tmp_dir)
        self.assertFalse(test_metrics.is_active())
        self.assertEqual(0, patch_periodic_warn.call_count)

    @patch("azurelinuxagent.common.logger.periodic_warn")
    def test_is_active_incorrect_file(self, patch_periodic_warn):
        open(os.path.join(self.tmp_dir, "cgroup.procs"), mode="wb").close()
        test_metrics = _CgroupController("test_extension", os.path.join(self.tmp_dir, "cgroup.procs"))
        self.assertEqual(False, test_metrics.is_active())
        self.assertEqual(1, patch_periodic_warn.call_count)

Azure-WALinuxAgent-a976115/tests/ga/test_cgroupstelemetry.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

import errno
import os
import random
import time

from azurelinuxagent.ga.cgroupcontroller import MetricsCounter
from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.ga.cpucontroller import CpuControllerV1
from azurelinuxagent.ga.memorycontroller import MemoryControllerV1
from tests.lib.tools import AgentTestCase, data_dir, patch


def raise_ioerror(*_):
    e = IOError()
    from errno import EIO
    e.errno = EIO
    raise e


def median(lst):
    data = sorted(lst)
    l_len = len(data)
    if l_len < 1:
        return None
    if l_len % 2 == 0:
        return (data[int((l_len - 1) / 2)] + data[int((l_len + 1) / 2)]) / 2.0
    else:
        return data[int((l_len - 1) / 2)]


def generate_metric_list(lst):
    return [float(sum(lst)) / float(len(lst)),
            min(lst),
            max(lst),
            median(lst),
            len(lst)]


def consume_cpu_time():
    waste = 0
    for x in range(1, 200000):  # pylint: disable=unused-variable
        waste += random.random()
    return waste


def consume_memory():
    waste = []
    for x in range(1, 3):  # pylint: disable=unused-variable
        waste.append([random.random()] * 10000)
        time.sleep(0.1)
        waste *= 0
    return waste


class TestCGroupsTelemetry(AgentTestCase):
    NumSummarizationValues = 7

    @classmethod
    def setUpClass(cls):
        AgentTestCase.setUpClass()

        # CPU Cgroups compute usage based on /proc/stat and /sys/fs/cgroup/.../cpuacct.stat; use mock data for those
        # files
        original_read_file = fileutil.read_file

        def mock_read_file(filepath, **args):
            if filepath == "/proc/stat":
                filepath = os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0")
            elif filepath.endswith("/cpuacct.stat"):
                filepath = os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t0")
            return original_read_file(filepath, **args)

        cls._mock_read_cpu_cgroup_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file)
        cls._mock_read_cpu_cgroup_file.start()

    @classmethod
    def tearDownClass(cls):
        cls._mock_read_cpu_cgroup_file.stop()
        AgentTestCase.tearDownClass()

    def setUp(self):
        AgentTestCase.setUp(self)
        CGroupsTelemetry.reset()

    def tearDown(self):
        AgentTestCase.tearDown(self)
        CGroupsTelemetry.reset()

    @staticmethod
    def _track_new_extension_cgroup_controllers(num_extensions):
        for i in range(num_extensions):
            dummy_cpu_controller = CpuControllerV1("dummy_extension_{0}".format(i), "dummy_cpu_path_{0}".format(i))
            dummy_cpu_controller.track_throttle_time(True)
            CGroupsTelemetry.track_cgroup_controller(dummy_cpu_controller)
            dummy_memory_controller = MemoryControllerV1("dummy_extension_{0}".format(i), "dummy_memory_path_{0}".format(i))
            CGroupsTelemetry.track_cgroup_controller(dummy_memory_controller)

    def _assert_cgroup_controllers_are_tracked(self, num_extensions):
        for i in range(num_extensions):
            self.assertTrue(CGroupsTelemetry.is_tracked("cpu:dummy_cpu_path_{0}".format(i)))
            self.assertTrue(CGroupsTelemetry.is_tracked("memory:dummy_memory_path_{0}".format(i)))

    def _assert_polled_metrics_equal(self, metrics, cpu_processor_metric_value, cpu_throttled_metric_value,
                                     current_total_memory_metric_value, current_anon_memory_metric_value,
                                     current_cache_memory_metric_value, max_memory_metric_value, swap_memory_value):
        for metric in metrics:
            self.assertIn(metric.category, ["CPU", "Memory"])
            if metric.category == "CPU":
                if metric.counter == MetricsCounter.PROCESSOR_PERCENT_TIME:
                    self.assertEqual(metric.value, cpu_processor_metric_value)
                if metric.counter == MetricsCounter.THROTTLED_TIME:
                    self.assertEqual(metric.value, cpu_throttled_metric_value)
            if metric.category == "Memory":
                self.assertIn(metric.counter, [MetricsCounter.TOTAL_MEM_USAGE, MetricsCounter.ANON_MEM_USAGE,
                                               MetricsCounter.CACHE_MEM_USAGE, MetricsCounter.MAX_MEM_USAGE,
                                               MetricsCounter.SWAP_MEM_USAGE])
                if metric.counter == MetricsCounter.TOTAL_MEM_USAGE:
                    self.assertEqual(metric.value, current_total_memory_metric_value)
                elif metric.counter == MetricsCounter.ANON_MEM_USAGE:
                    self.assertEqual(metric.value, current_anon_memory_metric_value)
                elif metric.counter == MetricsCounter.CACHE_MEM_USAGE:
                    self.assertEqual(metric.value, current_cache_memory_metric_value)
                elif metric.counter == MetricsCounter.MAX_MEM_USAGE:
                    self.assertEqual(metric.value, max_memory_metric_value)
                elif metric.counter == MetricsCounter.SWAP_MEM_USAGE:
                    self.assertEqual(metric.value, swap_memory_value)

    def test_telemetry_polling_with_active_cgroups(self, *args):  # pylint: disable=unused-argument
        num_extensions = 3

        self._track_new_extension_cgroup_controllers(num_extensions)

        with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_max_memory_usage") as patch_get_memory_max_usage:
            with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage") as patch_get_memory_usage:
                with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.try_swap_memory_usage") as patch_try_swap_memory_usage:
                    with patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_throttled_time") as patch_get_throttle_time:
                        with patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage") as patch_get_cpu_usage:
                            with patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active") as patch_is_active:
                                patch_is_active.return_value = True

                                current_cpu = 30
                                current_throttle_time = 5
                                current_anon_memory = 209715200
                                current_cache_memory = 314572800
                                current_total_memory = 209715200 + 314572800
                                current_max_memory = 471859200
                                current_swap_memory = 20971520

                                # 1 CPU metric + 1 CPU throttle + 1 total Memory + 1 anon memory + 1 cache memory + 1 Max memory + 1 swap memory
                                num_of_metrics_per_extn_expected = 7
                                patch_get_cpu_usage.return_value = current_cpu
                                patch_get_throttle_time.return_value = current_throttle_time
                                patch_get_memory_usage.return_value = current_anon_memory, current_cache_memory  # example 200 MB, 300 MB
                                patch_get_memory_max_usage.return_value = current_max_memory  # example 450 MB
                                patch_try_swap_memory_usage.return_value = current_swap_memory  # example 20MB

                                num_polls = 18
                                for data_count in range(1, num_polls + 1):  # pylint: disable=unused-variable
                                    metrics = CGroupsTelemetry.poll_all_tracked()

                                    self.assertEqual(len(metrics), num_extensions * num_of_metrics_per_extn_expected)
                                    self._assert_polled_metrics_equal(metrics, current_cpu, current_throttle_time,
                                                                      current_total_memory, current_anon_memory,
                                                                      current_cache_memory, current_max_memory,
                                                                      current_swap_memory)

    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_max_memory_usage", side_effect=raise_ioerror)
    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage", side_effect=raise_ioerror)
    @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage", side_effect=raise_ioerror)
    @patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active", return_value=False)
    def test_telemetry_polling_with_inactive_cgroups(self, *_):
        num_extensions = 5
        no_extensions_expected = 0  # pylint: disable=unused-variable

        self._track_new_extension_cgroup_controllers(num_extensions)
        self._assert_cgroup_controllers_are_tracked(num_extensions)

        metrics = CGroupsTelemetry.poll_all_tracked()

        for i in range(num_extensions):
            self.assertFalse(CGroupsTelemetry.is_tracked("dummy_cpu_path_{0}".format(i)))
            self.assertFalse(CGroupsTelemetry.is_tracked("dummy_memory_path_{0}".format(i)))

        self.assertEqual(len(metrics), 0)

    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_max_memory_usage")
    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage")
    @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage")
    @patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active")
    def test_telemetry_polling_with_changing_cgroups_state(self, patch_is_active, patch_get_cpu_usage,  # pylint: disable=unused-argument
                                                           patch_get_mem, patch_get_max_mem, *args):
        num_extensions = 5
        self._track_new_extension_cgroup_controllers(num_extensions)

        patch_is_active.return_value = True

        no_extensions_expected = 0  # pylint: disable=unused-variable
        expected_data_count = 1  # pylint: disable=unused-variable

        current_cpu = 30
        current_anon_memory = 104857600
        current_cache_memory = 104857600
        current_max_memory = 471859200

        patch_get_cpu_usage.return_value = current_cpu
        patch_get_mem.return_value = current_anon_memory, current_cache_memory  # example 100 MB, 100 MB
        patch_get_max_mem.return_value = current_max_memory  # example 450 MB

        self._assert_cgroup_controllers_are_tracked(num_extensions)
        CGroupsTelemetry.poll_all_tracked()

        self._assert_cgroup_controllers_are_tracked(num_extensions)

        patch_is_active.return_value = False
        patch_get_cpu_usage.side_effect = raise_ioerror
        patch_get_mem.side_effect = raise_ioerror
        patch_get_max_mem.side_effect = raise_ioerror

        CGroupsTelemetry.poll_all_tracked()

        for i in range(num_extensions):
            self.assertFalse(CGroupsTelemetry.is_tracked("dummy_cpu_path_{0}".format(i)))
            self.assertFalse(CGroupsTelemetry.is_tracked("dummy_memory_path_{0}".format(i)))

    # mocking get_proc_stat to make it run on Mac and other systems. This test does not need to read the values of the
    # /proc/stat file on the filesystem.
    @patch("azurelinuxagent.common.logger.periodic_warn")
    def test_telemetry_polling_to_not_generate_transient_logs_ioerror_file_not_found(self, patch_periodic_warn):
        num_extensions = 1
        self._track_new_extension_cgroup_controllers(num_extensions)
        self.assertEqual(0, patch_periodic_warn.call_count)

        # Not expecting logs present for io_error with errno=errno.ENOENT
        io_error_2 = IOError()
        io_error_2.errno = errno.ENOENT

        with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=io_error_2):
            poll_count = 1
            for data_count in range(poll_count, 10):  # pylint: disable=unused-variable
                CGroupsTelemetry.poll_all_tracked()

            self.assertEqual(0, patch_periodic_warn.call_count)

    @patch("azurelinuxagent.common.logger.periodic_warn")
    def test_telemetry_polling_to_generate_transient_logs_ioerror_permission_denied(self, patch_periodic_warn):
        num_extensions = 1
        num_controllers = 1
        is_active_check_per_controller = 2
        self._track_new_extension_cgroup_controllers(num_extensions)

        self.assertEqual(0, patch_periodic_warn.call_count)

        # Expecting logs to be present for different kind of errors
        io_error_3 = IOError()
        io_error_3.errno = errno.EPERM

        with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=io_error_3):
            poll_count = 1
            # get_cpu_usage cpu controller would generate a log statement, and each cgroup controller would invoke a
            # is active check raising an exception
            expected_count_per_call = num_controllers + is_active_check_per_controller

            for data_count in range(poll_count, 10):  # pylint: disable=unused-variable
                CGroupsTelemetry.poll_all_tracked()

            self.assertEqual(poll_count * expected_count_per_call, patch_periodic_warn.call_count)

    def test_telemetry_polling_to_generate_transient_logs_index_error(self):
        num_extensions = 1
        self._track_new_extension_cgroup_controllers(num_extensions)

        # Generating a different kind of error (non-IOError) to check the logging.
        # Trying to invoke IndexError during the getParameter call
        with patch("azurelinuxagent.common.utils.fileutil.read_file", return_value=''):
            with patch("azurelinuxagent.common.logger.periodic_warn") as patch_periodic_warn:
                expected_call_count = 1  # 1 periodic warning for cpu
                for data_count in range(1, 10):  # pylint: disable=unused-variable
                    CGroupsTelemetry.poll_all_tracked()

                self.assertEqual(expected_call_count, patch_periodic_warn.call_count)

    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.try_swap_memory_usage")
    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_max_memory_usage")
    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage")
    @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_throttled_time")
    @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage")
    @patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active")
    def test_telemetry_calculations(self, patch_is_active, patch_get_cpu_usage, patch_get_throttle_usage,
                                    patch_get_memory_usage, patch_get_memory_max_usage, patch_try_memory_swap_usage,
                                    *args):  # pylint: disable=unused-argument
        num_polls = 10
        num_extensions = 1

        cpu_percent_values = [random.randint(0, 100) for _ in range(num_polls)]
        cpu_throttle_values = [random.randint(0, 100) for _ in range(num_polls)]

        # only verifying calculations and not validity of the values.
        anon_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)]
        cache_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)]
        max_memory_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)]
        swap_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)]

        self._track_new_extension_cgroup_controllers(num_extensions)
        self.assertEqual(2 * num_extensions, len(CGroupsTelemetry._tracked))

        for i in range(num_polls):
            patch_is_active.return_value = True
            patch_get_cpu_usage.return_value = cpu_percent_values[i]
            patch_get_throttle_usage.return_value = cpu_throttle_values[i]
            patch_get_memory_usage.return_value = anon_usage_values[i], cache_usage_values[i]
            patch_get_memory_max_usage.return_value = max_memory_usage_values[i]
            patch_try_memory_swap_usage.return_value = swap_usage_values[i]

            metrics = CGroupsTelemetry.poll_all_tracked()

            # 1 CPU Processor + 1 CPU Throttle + 1 Total Memory + 1 anon memory + 1 cache memory + 1 Max memory + 1 swap memory
            self.assertEqual(len(metrics), 7 * num_extensions)
            self._assert_polled_metrics_equal(metrics, cpu_percent_values[i], cpu_throttle_values[i],
                                              anon_usage_values[i] + cache_usage_values[i], anon_usage_values[i],
                                              cache_usage_values[i], max_memory_usage_values[i], swap_usage_values[i])

    def test_cgroup_tracking(self, *args):  # pylint: disable=unused-argument
        num_extensions = 5
        num_controllers = 2
        self._track_new_extension_cgroup_controllers(num_extensions)
        self._assert_cgroup_controllers_are_tracked(num_extensions)
        self.assertEqual(num_extensions * num_controllers, len(CGroupsTelemetry._tracked))

    def test_cgroup_is_tracked(self, *args):  # pylint: disable=unused-argument
        num_extensions = 5
        self._track_new_extension_cgroup_controllers(num_extensions)
        self._assert_cgroup_controllers_are_tracked(num_extensions)
        self.assertFalse(CGroupsTelemetry.is_tracked("not_present_cpu_dummy_path"))
        self.assertFalse(CGroupsTelemetry.is_tracked("not_present_memory_dummy_path"))
@patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage", side_effect=raise_ioerror) def test_process_cgroup_metric_with_no_memory_cgroup_mounted(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroup_controllers(num_extensions) with patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage") as patch_get_cpu_usage: with patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active") as patch_is_active: patch_is_active.return_value = True current_cpu = 30 patch_get_cpu_usage.return_value = current_cpu poll_count = 1 for data_count in range(poll_count, 10): # pylint: disable=unused-variable metrics = CGroupsTelemetry.poll_all_tracked() self.assertEqual(len(metrics), num_extensions * 2) # only CPU populated (processor time and throttled time) self._assert_polled_metrics_equal(metrics, current_cpu, 0,0, 0, 0, 0, 0) @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage", side_effect=raise_ioerror) def test_process_cgroup_metric_with_no_cpu_cgroup_mounted(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroup_controllers(num_extensions) with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_max_memory_usage") as patch_get_memory_max_usage: with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage") as patch_get_memory_usage: with patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.try_swap_memory_usage") as patch_try_swap_memory_usage: with patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active") as patch_is_active: patch_is_active.return_value = True current_total_memory = 209715200 current_anon_memory = 104857600 current_cache_memory = 104857600 current_max_memory = 471859200 current_swap_memory = 20971520 patch_get_memory_usage.return_value = current_anon_memory, current_cache_memory # example 100 MB, 100 MB patch_get_memory_max_usage.return_value = 
current_max_memory # example 450 MB patch_try_swap_memory_usage.return_value = current_swap_memory # example 20MB num_polls = 10 for data_count in range(1, num_polls + 1): # pylint: disable=unused-variable metrics = CGroupsTelemetry.poll_all_tracked() # Memory is only populated, CPU is not. Thus 5 metrics for memory. self.assertEqual(len(metrics), num_extensions * 5) self._assert_polled_metrics_equal(metrics, 0, 0, current_total_memory, current_anon_memory, current_cache_memory, current_max_memory, current_swap_memory) @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage", side_effect=raise_ioerror) @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_max_memory_usage", side_effect=raise_ioerror) @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage", side_effect=raise_ioerror) def test_extension_telemetry_not_sent_for_empty_perf_metrics(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroup_controllers(num_extensions) with patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active") as patch_is_active: patch_is_active.return_value = False poll_count = 1 for data_count in range(poll_count, 10): # pylint: disable=unused-variable metrics = CGroupsTelemetry.poll_all_tracked() self.assertEqual(0, len(metrics)) @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage") @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_throttled_time") @patch("azurelinuxagent.ga.cgroupcontroller._CgroupController.is_active") def test_cgroup_telemetry_should_not_report_cpu_negative_value(self, patch_is_active, path_get_throttled_time, patch_get_cpu_usage): num_polls = 5 num_extensions = 1 # only verifying calculations and not validity of the values. 
cpu_percent_values = [random.randint(0, 100) for _ in range(num_polls-1)] cpu_percent_values.append(-1) cpu_throttled_values = [random.randint(0, 60 * 60) for _ in range(num_polls)] dummy_cpu_cgroup = CpuControllerV1("dummy_extension_name", "dummy_cpu_path") dummy_cpu_cgroup.track_throttle_time(True) CGroupsTelemetry.track_cgroup_controller(dummy_cpu_cgroup) self.assertEqual(1, len(CGroupsTelemetry._tracked)) for i in range(num_polls): patch_is_active.return_value = True patch_get_cpu_usage.return_value = cpu_percent_values[i] path_get_throttled_time.return_value = cpu_throttled_values[i] CGroupsTelemetry._track_throttled_time = True metrics = CGroupsTelemetry.poll_all_tracked() # 1 CPU metric + 1 CPU throttled # ignore CPU metrics from telemetry if cpu cgroup reports negative value if i < num_polls-1: self.assertEqual(len(metrics), 2 * num_extensions) else: self.assertEqual(len(metrics), 0) for metric in metrics: self.assertGreaterEqual(metric.value, 0, "telemetry should not report negative value") Azure-WALinuxAgent-a976115/tests/ga/test_collect_logs.py000066400000000000000000000610321510742556200232070ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Requires Python 2.6+ and Openssl 1.0+
#
import contextlib
import os

from azurelinuxagent.common import logger, conf
from azurelinuxagent.ga.cgroupcontroller import MetricValue, MetricsCounter
from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator
from azurelinuxagent.common.logger import Logger
from azurelinuxagent.common.protocol.util import ProtocolUtil
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.ga.collect_logs import get_collect_logs_handler, is_log_collection_allowed, \
    get_log_collector_monitor_handler
from azurelinuxagent.ga.cpucontroller import CpuControllerV1, CpuControllerV2
from azurelinuxagent.ga.memorycontroller import MemoryControllerV1, MemoryControllerV2
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib.http_request_predicates import HttpRequestPredicates
from tests.lib.wire_protocol_data import DATA_FILE
from tests.lib.tools import Mock, MagicMock, patch, AgentTestCase, clear_singleton_instances, \
    skip_if_predicate_true, is_python_version_26, data_dir


class CgroupVersions:
    V1 = "v1"
    V2 = "v2"


@contextlib.contextmanager
def _create_collect_logs_handler(iterations=1, cgroup_version=CgroupVersions.V1, cgroups_enabled=True,
                                 collect_logs_conf=True, cgroupv2_resource_limiting_conf=False):
    """
    Creates an instance of CollectLogsHandler that
        * Uses a mock_wire_protocol for network requests,
        * Runs its main loop only the number of times given in the 'iterations' parameter, and
        * Does not sleep at the end of each iteration

    The returned CollectLogsHandler is augmented with 2 methods:
        * get_mock_wire_protocol() - returns the mock protocol
        * run_and_wait() - invokes run() and join() on the CollectLogsHandler
    """
    with mock_wire_protocol(DATA_FILE) as protocol:
        protocol_util = MagicMock()
        protocol_util.get_protocol = Mock(return_value=protocol)
        with patch("azurelinuxagent.ga.collect_logs.get_protocol_util", return_value=protocol_util):
            with patch("azurelinuxagent.ga.collect_logs.CollectLogsHandler.stopped",
                       side_effect=[False] * iterations + [True]):
                with patch("time.sleep"):
                    with patch("azurelinuxagent.ga.collect_logs.conf.get_collect_logs",
                               return_value=collect_logs_conf):
                        # Grab the singleton to patch it
                        cgroups_configurator_singleton = CGroupConfigurator.get_instance()
                        if cgroup_version == CgroupVersions.V1:
                            with patch.object(cgroups_configurator_singleton, "enabled",
                                              return_value=cgroups_enabled):
                                def run_and_wait():
                                    collect_logs_handler.run()
                                    collect_logs_handler.join()

                                collect_logs_handler = get_collect_logs_handler()
                                collect_logs_handler.get_mock_wire_protocol = lambda: protocol
                                collect_logs_handler.run_and_wait = run_and_wait
                                yield collect_logs_handler
                        else:
                            with patch("azurelinuxagent.ga.collect_logs.conf.get_enable_cgroup_v2_resource_limiting",
                                       return_value=cgroupv2_resource_limiting_conf):
                                with patch.object(cgroups_configurator_singleton, "enabled", return_value=False):
                                    with patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.using_cgroup_v2",
                                               return_value=True):
                                        def run_and_wait():
                                            collect_logs_handler.run()
                                            collect_logs_handler.join()

                                        collect_logs_handler = get_collect_logs_handler()
                                        collect_logs_handler.get_mock_wire_protocol = lambda: protocol
                                        collect_logs_handler.run_and_wait = run_and_wait
                                        yield collect_logs_handler


@skip_if_predicate_true(is_python_version_26, "Disabled on Python 2.6")
class TestCollectLogs(AgentTestCase, HttpRequestPredicates):
    def setUp(self):
        AgentTestCase.setUp(self)
        prefix = "UnitTest"
        logger.DEFAULT_LOGGER = Logger(prefix=prefix)

        self.archive_path = os.path.join(self.tmp_dir, "logs.zip")
        self.mock_archive_path = patch("azurelinuxagent.ga.collect_logs.COMPRESSED_ARCHIVE_PATH", self.archive_path)
        self.mock_archive_path.start()

        self.logger_path = os.path.join(self.tmp_dir, "waagent.log")
        self.mock_logger_path = patch.object(conf, "get_agent_log_file", return_value=self.logger_path)
        self.mock_logger_path.start()

        # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not
        # reuse a previous state
        clear_singleton_instances(ProtocolUtil)

    def tearDown(self):
        if os.path.exists(self.archive_path):
            os.remove(self.archive_path)
        self.mock_archive_path.stop()
        if os.path.exists(self.logger_path):
            os.remove(self.logger_path)
        self.mock_logger_path.stop()
        AgentTestCase.tearDown(self)

    def _create_dummy_archive(self, size=1024):
        with open(self.archive_path, "wb") as f:
            f.truncate(size)

    def test_it_should_only_collect_logs_if_conditions_are_met(self):
        # In order to collect logs, three conditions have to be met:
        # 1) It should be enabled in the configuration.
        # 2) The system must be using cgroups to manage services - needed for resource limiting of the log
        #    collection. The agent currently fully supports resource limiting for v1, but only supports log
        #    collector resource limiting for v2 if enabled via configuration.
        #    This condition is True if either:
        #       a. cgroup usage in the agent is enabled; OR
        #       b. the machine is using cgroup v2 and v2 resource limiting is enabled in the configuration.
        # 3) The python version must be greater than 2.6 in order to support the ZipFile library used when collecting.
        #
        # Note, cgroups should not be in an 'enabled' state in the configurator if v2 is in use. Resource
        # governance is not fully supported on v2 yet.

        # If collect logs is not enabled in the configuration, then log collection should always be disabled

        # Case 1:
        #   - Cgroups are enabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is True
        #   - collect logs config flag false
        with _create_collect_logs_handler(cgroups_enabled=True, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=True, collect_logs_conf=False):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # Case 2:
        #   - Cgroups are enabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is False
        #   - collect logs config flag false
        with _create_collect_logs_handler(cgroups_enabled=True, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=False, collect_logs_conf=False):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # Case 3:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is in use
        #   - Cgroup v2 resource limiting conf is True
        #   - collect logs config flag false
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V2,
                                          cgroupv2_resource_limiting_conf=True, collect_logs_conf=False):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # Case 4:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is in use
        #   - Cgroup v2 resource limiting conf is False
        #   - collect logs config flag false
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V2,
                                          cgroupv2_resource_limiting_conf=False, collect_logs_conf=False):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # Case 5:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is True
        #   - collect logs config flag false
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=True, collect_logs_conf=False):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # Case 6:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is False
        #   - collect logs config flag false
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=False, collect_logs_conf=False):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # If collect logs is enabled in the configuration and cgroups are enabled in the configurator, then log
        # collection should always be enabled

        # Case 7:
        #   - Cgroups are enabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is True
        #   - collect logs config flag true
        with _create_collect_logs_handler(cgroups_enabled=True, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=True, collect_logs_conf=True):
            self.assertEqual(True, is_log_collection_allowed(), "Log collection should have been enabled")

        # Case 8:
        #   - Cgroups are enabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is False
        #   - collect logs config flag true
        with _create_collect_logs_handler(cgroups_enabled=True, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=False, collect_logs_conf=True):
            self.assertEqual(True, is_log_collection_allowed(), "Log collection should have been enabled")

        # If collect logs is enabled in the configuration and v2 is in use with the v2 resource limiting conf
        # enabled, then log collection should always be enabled

        # Case 9:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is in use
        #   - Cgroup v2 resource limiting conf is True
        #   - collect logs config flag true
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V2,
                                          cgroupv2_resource_limiting_conf=True, collect_logs_conf=True):
            self.assertEqual(True, is_log_collection_allowed(), "Log collection should have been enabled")

        # If collect logs is enabled in the configuration and v2 is in use but the v2 resource limiting conf
        # disabled, then log collection should always be disabled

        # Case 10:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is in use
        #   - Cgroup v2 resource limiting conf is False
        #   - collect logs config flag true
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V2,
                                          cgroupv2_resource_limiting_conf=False, collect_logs_conf=True):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # If collect logs is enabled in the configuration but cgroups are disabled in the configurator and v2 is
        # not in use, then log collection should always be disabled

        # Case 11:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is True
        #   - collect logs config flag true
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=True, collect_logs_conf=True):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

        # Case 12:
        #   - Cgroups are disabled in the configurator
        #   - Cgroup v2 is not in use
        #   - Cgroup v2 resource limiting conf is False
        #   - collect logs config flag true
        with _create_collect_logs_handler(cgroups_enabled=False, cgroup_version=CgroupVersions.V1,
                                          cgroupv2_resource_limiting_conf=False, collect_logs_conf=True):
            self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled")

    def test_it_uploads_logs_when_collection_is_successful(self):
        archive_size = 42

        def mock_run_command(*_, **__):
            return self._create_dummy_archive(size=archive_size)

        with _create_collect_logs_handler() as collect_logs_handler:
            with patch("azurelinuxagent.ga.collect_logs.shellutil.run_command", side_effect=mock_run_command):
                def http_put_handler(url, content, **__):
                    if self.is_host_plugin_put_logs_request(url):
                        http_put_handler.counter += 1
                        http_put_handler.archive = content
                        return MockHttpResponse(status=200)
                    return None

                http_put_handler.counter = 0
                http_put_handler.archive = b""

                protocol = collect_logs_handler.get_mock_wire_protocol()
                protocol.set_http_handlers(http_put_handler=http_put_handler)
                collect_logs_handler.run_and_wait()

                self.assertEqual(http_put_handler.counter, 1, "The PUT API to upload logs should have been called once")
                self.assertTrue(os.path.exists(self.archive_path), "The archive file should exist on disk")
                self.assertEqual(archive_size, len(http_put_handler.archive),
                                 "The archive file should have {0} bytes, not {1}".format(
                                     archive_size, len(http_put_handler.archive)))

    def test_it_does_not_upload_logs_when_collection_is_unsuccessful(self):
        with _create_collect_logs_handler() as collect_logs_handler:
            with patch("azurelinuxagent.ga.collect_logs.shellutil.run_command",
                       side_effect=Exception("test exception")):
                def http_put_handler(url, _, **__):
                    if self.is_host_plugin_put_logs_request(url):
                        http_put_handler.counter += 1
                        return MockHttpResponse(status=200)
                    return None

                http_put_handler.counter = 0

                protocol = collect_logs_handler.get_mock_wire_protocol()
                protocol.set_http_handlers(http_put_handler=http_put_handler)
                collect_logs_handler.run_and_wait()

                self.assertFalse(os.path.exists(self.archive_path), "The archive file should not exist on disk")
                self.assertEqual(http_put_handler.counter, 0, "The PUT API to upload logs shouldn't have been called")


@contextlib.contextmanager
def _create_log_collector_monitor_handler(iterations=1, cgroup_version=CgroupVersions.V1):
    """
    Creates an instance of LogCollectorMonitorHandler that
        * Runs its main loop only the number of times given in the 'iterations' parameter, and
        * Does not sleep at the end of each iteration

    The returned LogCollectorMonitorHandler is augmented with 1 method:
        * run_and_wait() - invokes run() and join() on the LogCollectorMonitorHandler
    """
    with patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler.stopped",
               side_effect=[False] * iterations + [True]):
        with patch("time.sleep"):
            original_read_file = fileutil.read_file

            def mock_read_file_v1(filepath, **args):
                if filepath == "/proc/stat":
                    filepath = os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0")
                elif filepath.endswith("/cpuacct.stat"):
                    filepath = os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t0")
                return original_read_file(filepath, **args)

            def mock_read_file_v2(filepath, **args):
                if filepath == "/proc/uptime":
                    filepath = os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t0")
                elif filepath.endswith("/cpu.stat"):
                    filepath = os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t0")
                return original_read_file(filepath, **args)

            mock_read_file = None
            cgroups = []
            if cgroup_version == "v1":
                mock_read_file = mock_read_file_v1
                cgroups = [
                    CpuControllerV1("test", "dummy_cpu_path"),
                    MemoryControllerV1("test", "dummy_memory_path")
                ]
            else:
                mock_read_file = mock_read_file_v2
                cgroups = [
                    CpuControllerV2("test", "dummy_cpu_path"),
                    MemoryControllerV2("test", "dummy_memory_path")
                ]

            with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file):
                def run_and_wait():
                    monitor_log_collector.run()
                    monitor_log_collector.join()

                monitor_log_collector = get_log_collector_monitor_handler(cgroups)
                monitor_log_collector.run_and_wait = run_and_wait
                yield monitor_log_collector


class TestLogCollectorMonitorHandler(AgentTestCase):

    def test_get_max_recorded_metrics(self):
        with _create_log_collector_monitor_handler(iterations=2) as log_collector_monitor_handler:
            nonlocal_vars = {
                'cpu_iteration': 0,
                'mem_iteration': 0,
                'multiplier': 5
            }

            def get_different_cpu_metrics(**kwargs):  # pylint: disable=W0613
                metrics = [MetricValue("Process", MetricsCounter.PROCESSOR_PERCENT_TIME, "service", 4.5),
                           MetricValue("Process", MetricsCounter.THROTTLED_TIME, "service",
                                       nonlocal_vars['cpu_iteration'] * nonlocal_vars['multiplier'] + 10.000)]
                nonlocal_vars['cpu_iteration'] += 1
                return metrics

            def get_different_memory_metrics(**kwargs):  # pylint: disable=W0613
                metrics = [MetricValue("Memory", MetricsCounter.TOTAL_MEM_USAGE, "service", 20),
                           MetricValue("Memory", MetricsCounter.ANON_MEM_USAGE, "service", 15),
                           MetricValue("Memory", MetricsCounter.CACHE_MEM_USAGE, "service",
                                       nonlocal_vars['mem_iteration'] * nonlocal_vars['multiplier'] + 5),
                           MetricValue("Memory", MetricsCounter.MAX_MEM_USAGE, "service", 30),
                           MetricValue("Memory", MetricsCounter.SWAP_MEM_USAGE, "service", 0)]
                nonlocal_vars['mem_iteration'] += 1
                return metrics

            with patch("azurelinuxagent.ga.cpucontroller._CpuController.get_tracked_metrics",
                       side_effect=get_different_cpu_metrics):
                with patch("azurelinuxagent.ga.memorycontroller._MemoryController.get_tracked_metrics",
                           side_effect=get_different_memory_metrics):
                    log_collector_monitor_handler.run_and_wait()
                    max_recorded_metrics = log_collector_monitor_handler.get_max_recorded_metrics()
                    self.assertEqual(len(max_recorded_metrics), 7)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.PROCESSOR_PERCENT_TIME], 4.5)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.THROTTLED_TIME], 15.0)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.TOTAL_MEM_USAGE], 20)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.ANON_MEM_USAGE], 15)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.CACHE_MEM_USAGE], 10)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.MAX_MEM_USAGE], 30)
                    self.assertEqual(max_recorded_metrics[MetricsCounter.SWAP_MEM_USAGE], 0)

    def test_verify_log_collector_memory_limit_exceeded(self):
        with _create_log_collector_monitor_handler() as log_collector_monitor_handler:
            cache_exceeded = [MetricValue("Process", MetricsCounter.PROCESSOR_PERCENT_TIME, "service", 4.5),
                              MetricValue("Process", MetricsCounter.THROTTLED_TIME, "service", 10.281),
                              MetricValue("Memory", MetricsCounter.TOTAL_MEM_USAGE, "service", 170 * 1024 ** 2),
                              MetricValue("Memory", MetricsCounter.ANON_MEM_USAGE, "service", 15 * 1024 ** 2),
                              MetricValue("Memory", MetricsCounter.CACHE_MEM_USAGE, "service", 160 * 1024 ** 2),
                              MetricValue("Memory", MetricsCounter.MAX_MEM_USAGE, "service", 171 * 1024 ** 2),
                              MetricValue("Memory", MetricsCounter.SWAP_MEM_USAGE, "service", 0)]
            with patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler._poll_resource_usage",
                       return_value=cache_exceeded):
                with patch("os._exit") as mock_exit:
                    log_collector_monitor_handler.run_and_wait()
                    self.assertEqual(mock_exit.call_count, 1)

        with _create_log_collector_monitor_handler() as log_collector_monitor_handler:
            anon_exceeded = [MetricValue("Process", MetricsCounter.PROCESSOR_PERCENT_TIME, "service", 4.5),
                             MetricValue("Process", MetricsCounter.THROTTLED_TIME, "service", 10.281),
                             MetricValue("Memory", MetricsCounter.TOTAL_MEM_USAGE, "service", 170 * 1024 ** 2),
                             MetricValue("Memory", MetricsCounter.ANON_MEM_USAGE, "service", 30 * 1024 ** 2),
                             MetricValue("Memory", MetricsCounter.CACHE_MEM_USAGE, "service", 140 * 1024 ** 2),
                             MetricValue("Memory", MetricsCounter.MAX_MEM_USAGE, "service", 171 * 1024 ** 2),
                             MetricValue("Memory", MetricsCounter.SWAP_MEM_USAGE, "service", 0)]
            with patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler._poll_resource_usage",
                       return_value=anon_exceeded):
                with patch("os._exit") as mock_exit:
                    log_collector_monitor_handler.run_and_wait()
                    self.assertEqual(mock_exit.call_count, 1)

        with _create_log_collector_monitor_handler(cgroup_version=CgroupVersions.V2) as log_collector_monitor_handler:
            mem_throttled_exceeded = [MetricValue("Process", MetricsCounter.PROCESSOR_PERCENT_TIME, "service", 4.5),
                                      MetricValue("Process", MetricsCounter.THROTTLED_TIME, "service", 10.281),
                                      MetricValue("Memory", MetricsCounter.TOTAL_MEM_USAGE, "service", 170 * 1024 ** 2),
                                      MetricValue("Memory", MetricsCounter.ANON_MEM_USAGE, "service", 15 * 1024 ** 2),
                                      MetricValue("Memory", MetricsCounter.CACHE_MEM_USAGE, "service", 140 * 1024 ** 2),
                                      MetricValue("Memory", MetricsCounter.MAX_MEM_USAGE, "service", 171 * 1024 ** 2),
                                      MetricValue("Memory", MetricsCounter.SWAP_MEM_USAGE, "service", 0),
                                      MetricValue("Memory", MetricsCounter.MEM_THROTTLED, "service", 11)]
            with patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler._poll_resource_usage",
                       return_value=mem_throttled_exceeded):
                with patch("os._exit") as mock_exit:
                    log_collector_monitor_handler.run_and_wait()
                    self.assertEqual(mock_exit.call_count, 1)
Azure-WALinuxAgent-a976115/tests/ga/test_collect_telemetry_events.py
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import glob import json import os import random import re import shutil import string import uuid from collections import defaultdict from mock import patch, MagicMock from azurelinuxagent.common import conf from azurelinuxagent.common.event import EVENTS_DIRECTORY from azurelinuxagent.common.exception import InvalidExtensionEventError, ServiceStoppedError from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.telemetryevent import GuestAgentGenericLogsSchema, \ CommonTelemetryEventSchema from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.collect_telemetry_events import ExtensionEventSchema, _ProcessExtensionEvents from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.tools import AgentTestCase, clear_singleton_instances, data_dir class TestExtensionTelemetryHandler(AgentTestCase, HttpRequestPredicates): _TEST_DATA_DIR = os.path.join(data_dir, "events", "extension_events") _WELL_FORMED_FILES = os.path.join(_TEST_DATA_DIR, "well_formed_files") _MALFORMED_FILES = os.path.join(_TEST_DATA_DIR, "malformed_files") _MIX_FILES = os.path.join(_TEST_DATA_DIR, "mix_files") _SAS_FILES = os.path.join(_TEST_DATA_DIR, "sas_files") # To make tests more versatile, include this key in a test event to mark that event as a bad event. # This event will then be skipped and will not be counted as a good event. This is purely for testing purposes, # we use the good_event_count to validate the no of events the agent actually sends to Wireserver. 
# Eg: { # "EventLevel": "INFO", # "Message": "Starting IaaS ScriptHandler Extension v1", # "Version": "1.2.3", # "TaskName": "Extension Info", # "EventPid": "5676", # "EventTid": "1", # "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", # "TimeStamp": "2019-12-12T01:11:38.2298194Z", # "BadEvent": true # } BAD_EVENT_KEY = 'BadEvent' def setUp(self): AgentTestCase.setUp(self) clear_singleton_instances(ProtocolUtil) # Create the log directory if not exists fileutil.mkdir(conf.get_ext_log_dir()) def tearDown(self): AgentTestCase.tearDown(self) @staticmethod def _parse_file_and_count_good_events(test_events_file_path): if not os.path.exists(test_events_file_path): raise OSError("Test Events file {0} not found".format(test_events_file_path)) try: with open(test_events_file_path, "rb") as fd: event_data = fd.read().decode("utf-8") # Parse the string and get the list of events events = json.loads(event_data) if not isinstance(events, list): events = [events] except Exception as e: print("Error parsing json file: {0}".format(e)) return 0 bad_key = TestExtensionTelemetryHandler.BAD_EVENT_KEY return len([e for e in events if bad_key not in e or not e[bad_key]]) @staticmethod def _create_random_extension_events_dir_with_events(no_of_extensions, events_path, no_of_chars=10): if os.path.isdir(events_path): # If its a directory, get all files from that directory test_events_paths = glob.glob(os.path.join(events_path, "*")) else: test_events_paths = [events_path] extension_names = {} for i in range(no_of_extensions): # pylint: disable=unused-variable ext_name = "Microsoft.OSTCExtensions.{0}".format(''.join(random.sample(string.ascii_letters, no_of_chars))) no_of_good_events = 0 for test_events_file_path in test_events_paths: if not os.path.exists(test_events_file_path) or not os.path.isfile(test_events_file_path): continue no_of_good_events += TestExtensionTelemetryHandler._parse_file_and_count_good_events(test_events_file_path) events_dir = 
os.path.join(conf.get_ext_log_dir(), ext_name, EVENTS_DIRECTORY) fileutil.mkdir(events_dir) shutil.copy(test_events_file_path, events_dir) extension_names[ext_name] = no_of_good_events return extension_names @staticmethod def _get_no_of_events_from_body(body): return body.count("") @staticmethod def _replace_in_file(file_path, replace_from, replace_to): with open(file_path, 'r') as f: content = f.read() content = content.replace(replace_from, replace_to) with open(file_path, 'w') as f: f.write(content) @staticmethod def _get_param_from_events(event_list): for event in event_list: for param in event.parameters: yield param @staticmethod def _get_handlers_with_version(event_list): event_with_name_and_versions = defaultdict(list) for param in TestExtensionTelemetryHandler._get_param_from_events(event_list): if param.name == GuestAgentGenericLogsSchema.EventName: handler_name, version = param.value.split("-") event_with_name_and_versions[handler_name].append(version) return event_with_name_and_versions @staticmethod def _get_param_value_from_event_body_if_exists(event_list, param_name): param_values = [] for param in TestExtensionTelemetryHandler._get_param_from_events(event_list): if param.name == param_name: param_values.append(param.value) return param_values @contextlib.contextmanager def _create_extension_telemetry_processor(self, telemetry_handler=None): with patch("azurelinuxagent.ga.collect_telemetry_events.NUM_OF_EVENT_FILE_RETRIES", 1): event_list = [] if not telemetry_handler: telemetry_handler = MagicMock(autospec=True) telemetry_handler.stopped = MagicMock(return_value=False) telemetry_handler.enqueue_event = MagicMock(wraps=event_list.append) extension_telemetry_processor = _ProcessExtensionEvents(telemetry_handler) extension_telemetry_processor.event_list = event_list yield extension_telemetry_processor def _assert_handler_data_in_event_list(self, telemetry_events, ext_names_with_count, expected_count=None): for ext_name, test_file_event_count in 
ext_names_with_count.items(): # If expected_count is not given, then the take the no of good events in the test file as the source of truth count = expected_count if expected_count is not None else test_file_event_count if count == 0: self.assertNotIn(ext_name, telemetry_events, "Found telemetry events for unwanted extension {0}".format(ext_name)) continue self.assertIn(ext_name, telemetry_events, "Extension name: {0} not found in the Telemetry Events".format(ext_name)) self.assertEqual(len(telemetry_events[ext_name]), count, "No of good events for ext {0} do not match".format(ext_name)) def _assert_param_in_events(self, event_list, param_key, param_value, min_count=1): count = 0 for param in TestExtensionTelemetryHandler._get_param_from_events(event_list): if param.name == param_key and param.value == param_value: count += 1 self.assertGreaterEqual(count, min_count, "'{0}: {1}' param only found {2} times in events. Min_count required: {3}".format( param_key, param_value, count, min_count)) @staticmethod def _is_string_in_event_body(event_body, expected_string): found = False for body in event_body: if expected_string in body: found = True break return found def test_it_should_not_capture_malformed_events(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: bad_name_ext_with_count = self._create_random_extension_events_dir_with_events(2, self._MALFORMED_FILES) bad_json_ext_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._MALFORMED_FILES, "bad_json_files", "1591816395.json")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, bad_name_ext_with_count, expected_count=0) self._assert_handler_data_in_event_list(telemetry_events, bad_json_ext_with_count, expected_count=0) def test_it_should_redact_extension_events(self): with 
self._create_extension_telemetry_processor() as extension_telemetry_processor: ext_names_with_count = self._create_random_extension_events_dir_with_events(2, self._SAS_FILES) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count) context1_vals = self._get_param_value_from_event_body_if_exists(extension_telemetry_processor.event_list, GuestAgentGenericLogsSchema.Context1) self.assertEqual(4, len(context1_vals), "There should be 4 Context1 values") for val in context1_vals: self.assertIn("", val, "sasToken should be redacted") def test_it_should_capture_and_send_correct_events(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: ext_names_with_count = self._create_random_extension_events_dir_with_events(2, self._WELL_FORMED_FILES) ext_names_with_count.update(self._create_random_extension_events_dir_with_events(3, os.path.join( self._MIX_FILES, "1591835859.json"))) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count) def test_it_should_disregard_bad_events_and_keep_good_ones_in_a_mixed_file(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, self._MIX_FILES) extensions_with_count.update(self._create_random_extension_events_dir_with_events(3, os.path.join( self._MALFORMED_FILES, "bad_name_file.json"))) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def test_it_should_limit_max_no_of_events_to_send_per_run_per_extension_and_report_event(self): 
max_events = 5 with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with self._create_extension_telemetry_processor() as extension_telemetry_processor: with patch.object(extension_telemetry_processor, "_MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD", max_events): ext_names_with_count = self._create_random_extension_events_dir_with_events(5, self._WELL_FORMED_FILES) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count, expected_count=max_events) pattern = r'Reached max count for the extension:\s*(?P<name>.+?);\s*.+' self._assert_event_reported(mock_event, ext_names_with_count, pattern) def test_it_should_only_process_the_newer_events(self): max_events = 5 no_of_extension = 2 test_guid = str(uuid.uuid4()) with self._create_extension_telemetry_processor() as extension_telemetry_processor: with patch.object(extension_telemetry_processor, "_MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD", max_events): ext_names_with_count = self._create_random_extension_events_dir_with_events(no_of_extension, self._WELL_FORMED_FILES) for ext_name in ext_names_with_count.keys(): self._replace_in_file( os.path.join(conf.get_ext_log_dir(), ext_name, EVENTS_DIRECTORY, "9999999999.json"), replace_from='"{0}": ""'.format(ExtensionEventSchema.OperationId), replace_to='"{0}": "{1}"'.format(ExtensionEventSchema.OperationId, test_guid)) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count, expected_count=max_events) self._assert_param_in_events(extension_telemetry_processor.event_list, param_key=GuestAgentGenericLogsSchema.Context1, param_value="This is the latest event", min_count=no_of_extension*max_events)
self._assert_param_in_events(extension_telemetry_processor.event_list, param_key=GuestAgentGenericLogsSchema.Context3, param_value=test_guid, min_count=no_of_extension*max_events) def test_it_should_parse_extension_event_irrespective_of_case(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "different_cases")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def test_it_should_parse_special_chars_properly(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "special_chars")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def test_it_should_parse_int_type_for_eventpid_or_eventtid_properly(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "int_type")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def _setup_and_assert_tests_for_max_sizes(self, no_of_extensions=2, expected_count=None): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(no_of_extensions, os.path.join( self._TEST_DATA_DIR, "large_messages")) 
extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count, expected_count) return extensions_with_count, extension_telemetry_processor.event_list def _assert_invalid_extension_error_event_reported(self, mock_event, handler_name_with_count, error, expected_drop_count=None): self.assertTrue(mock_event.called, "Not even a single event was logged") patt = r'Extension:\s+(?P<name>Microsoft.OSTCExtensions.+?);.+\s*Reason:\s+\[InvalidExtensionEventError\](?P<reason>.+?):.+Dropped Count:\s*(?P<count>\d+)' for _, kwargs in mock_event.call_args_list: msg = kwargs['message'] match = re.search(patt, msg, re.MULTILINE) if match is not None: self.assertEqual(match.group("reason").strip(), error, "Incorrect error") self.assertIn(match.group("name"), handler_name_with_count, "Extension event not found") count = handler_name_with_count.pop(match.group("name")) count = expected_drop_count if expected_drop_count is not None else count self.assertEqual(int(count), int(match.group("count")), "Dropped count doesn't match") self.assertEqual(len(handler_name_with_count), 0, "Not all extension events were matched") def _assert_event_reported(self, mock_event, handler_name_with_count, pattern): self.assertTrue(mock_event.called, "Not even a single event was logged") for _, kwargs in mock_event.call_args_list: msg = kwargs['message'] match = re.search(pattern, msg, re.MULTILINE) if match is not None: expected_handler_name = match.group("name") self.assertIn(expected_handler_name, handler_name_with_count, "Extension event not found") handler_name_with_count.pop(expected_handler_name) self.assertEqual(len(handler_name_with_count), 0, "Not all extension events were matched") def test_it_should_trim_message_if_more_than_limit(self): max_len = 100 no_of_extensions = 2 with patch("azurelinuxagent.ga.collect_telemetry_events._ProcessExtensionEvents._EXTENSION_EVENT_MAX_MSG_LEN", max_len):
handler_name_with_count, event_list = self._setup_and_assert_tests_for_max_sizes() # pylint: disable=unused-variable context1_vals = self._get_param_value_from_event_body_if_exists(event_list, GuestAgentGenericLogsSchema.Context1) self.assertEqual(no_of_extensions, len(context1_vals), "There should be {0} Context1 values".format(no_of_extensions)) for val in context1_vals: self.assertLessEqual(len(val), max_len, "Message Length does not match") def test_it_should_skip_events_larger_than_max_size_and_report_event(self): max_size = 1000 no_of_extensions = 3 with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with patch("azurelinuxagent.ga.collect_telemetry_events._ProcessExtensionEvents._EXTENSION_EVENT_MAX_SIZE", max_size): handler_name_with_count, _ = self._setup_and_assert_tests_for_max_sizes(no_of_extensions, expected_count=0) self._assert_invalid_extension_error_event_reported(mock_event, handler_name_with_count, error=InvalidExtensionEventError.OversizeEventError) def test_it_should_skip_large_files_greater_than_max_file_size_and_report_event(self): max_file_size = 10000 no_of_extensions = 5 with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with patch("azurelinuxagent.ga.collect_telemetry_events._ProcessExtensionEvents._EXTENSION_EVENT_FILE_MAX_SIZE", max_file_size): handler_name_with_count, _ = self._setup_and_assert_tests_for_max_sizes(no_of_extensions, expected_count=0) pattern = r'Skipping file:\s*{0}/(?P<name>.+?)/{1}.+'.format(conf.get_ext_log_dir(), EVENTS_DIRECTORY) self._assert_event_reported(mock_event, handler_name_with_count, pattern) def test_it_should_map_extension_event_json_correctly_to_telemetry_event(self): # EventName maps to HandlerName + '-' + Version from event file expected_mapping = { GuestAgentGenericLogsSchema.EventName: ExtensionEventSchema.Version, GuestAgentGenericLogsSchema.CapabilityUsed: ExtensionEventSchema.EventLevel, GuestAgentGenericLogsSchema.TaskName:
ExtensionEventSchema.TaskName, GuestAgentGenericLogsSchema.Context1: ExtensionEventSchema.Message, GuestAgentGenericLogsSchema.Context2: ExtensionEventSchema.Timestamp, GuestAgentGenericLogsSchema.Context3: ExtensionEventSchema.OperationId, CommonTelemetryEventSchema.EventPid: ExtensionEventSchema.EventPid, CommonTelemetryEventSchema.EventTid: ExtensionEventSchema.EventTid } with self._create_extension_telemetry_processor() as extension_telemetry_processor: test_file = os.path.join(self._WELL_FORMED_FILES, "1592355539.json") handler_name = list(self._create_random_extension_events_dir_with_events(1, test_file))[0] extension_telemetry_processor.run() telemetry_event_map = defaultdict(list) for telemetry_event_key in expected_mapping: telemetry_event_map[telemetry_event_key] = self._get_param_value_from_event_body_if_exists( extension_telemetry_processor.event_list, telemetry_event_key) with open(test_file, 'r') as event_file: data = json.load(event_file) extension_event_map = defaultdict(list) for extension_event in data: for event_key in extension_event: extension_event_map[event_key].append(extension_event[event_key]) for telemetry_key in expected_mapping: extension_event_key = expected_mapping[telemetry_key] telemetry_data = telemetry_event_map[telemetry_key] # EventName = "HandlerName-Version" from Extensions extension_data = ["{0}-{1}".format(handler_name, v) for v in extension_event_map[ extension_event_key]] if telemetry_key == GuestAgentGenericLogsSchema.EventName else \ extension_event_map[extension_event_key] self.assertEqual(telemetry_data, extension_data, "The data for {0} and {1} doesn't map properly".format(telemetry_key, extension_event_key)) def test_it_should_always_cleanup_files_on_good_and_bad_cases(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "large_messages")) 
extensions_with_count.update(self._create_random_extension_events_dir_with_events(3, self._MALFORMED_FILES)) extensions_with_count.update(self._create_random_extension_events_dir_with_events(4, self._WELL_FORMED_FILES)) extensions_with_count.update(self._create_random_extension_events_dir_with_events(1, self._MIX_FILES)) # Create random files in the events directory for each extension just to ensure that we delete them later for handler_name in extensions_with_count.keys(): file_name = os.path.join(conf.get_ext_log_dir(), handler_name, EVENTS_DIRECTORY, ''.join(random.sample(string.ascii_letters, 10))) with open(file_name, 'a') as random_file: random_file.write('1*2*3' * 100) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) for handler_name in extensions_with_count.keys(): events_path = os.path.join(conf.get_ext_log_dir(), handler_name, EVENTS_DIRECTORY) self.assertTrue(os.path.exists(events_path), "{0} dir doesn't exist".format(events_path)) self.assertEqual(0, len(os.listdir(events_path)), "There should be no files inside the events dir") def test_it_should_skip_unwanted_parameters_in_event_file(self): extra_params = ["SomethingNewButNotCool", "SomethingVeryWeird"] with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(3, os.path.join( self._TEST_DATA_DIR, "extra_parameters")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) for param in extra_params: self.assertEqual(0, len( self._get_param_value_from_event_body_if_exists(extension_telemetry_processor.event_list, param)), "Unwanted param {0} found".format(param)) def
test_it_should_not_send_events_which_dont_have_all_required_keys_and_report_event(self): with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(3, os.path.join( self._TEST_DATA_DIR, "missing_parameters")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count, expected_count=0) self.assertTrue(mock_event.called, "Not even a single event was logged") for _, kwargs in mock_event.call_args_list: # Example: #['Dropped events for Extension: Microsoft.OSTCExtensions.ZJQRNqbKtP; Details:', # 'Reason: [InvalidExtensionEventError] MissingKeyError: eventpid not found; Dropped Count: 2', # 'Reason: [InvalidExtensionEventError] MissingKeyError: Message not found; Dropped Count: 2', # 'Reason: [InvalidExtensionEventError] MissingKeyError: version not found; Dropped Count: 3'] msg = kwargs['message'].split("\n") ext_name = re.search(r'Dropped events for Extension:\s+(?P<name>Microsoft.OSTCExtensions.+?);.+', msg.pop(0)) if ext_name is not None: ext_name = ext_name.group('name') self.assertIn(ext_name, extensions_with_count, "Extension {0} not found".format(ext_name)) patt = r'\s*Reason:\s+\[InvalidExtensionEventError\](?P<reason>.+?):\s*(?P<key>.+?)\s+not found;\s+Dropped Count:\s*(?P<count>\d+)' expected_error_drop_count = { ExtensionEventSchema.EventPid.lower(): 2, ExtensionEventSchema.Message.lower(): 2, ExtensionEventSchema.Version.lower(): 3 } for m in msg: match = re.search(patt, m) self.assertIsNotNone(match, "No InvalidExtensionEventError errors reported") self.assertEqual(match.group("reason").strip(), InvalidExtensionEventError.MissingKeyError, "Error is not a {0}".format(InvalidExtensionEventError.MissingKeyError)) observed_error =
match.group("key").lower() self.assertIn(observed_error, expected_error_drop_count, "Unexpected error reported") self.assertEqual(expected_error_drop_count.pop(observed_error), int(match.group("count")), "Unequal no of dropped events") self.assertEqual(len(expected_error_drop_count), 0, "Not all errors were found") del extensions_with_count[ext_name] self.assertEqual(len(extensions_with_count), 0, "Not all extension events were matched") def test_it_should_not_send_event_where_message_is_empty_and_report_event(self): with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(3, os.path.join( self._TEST_DATA_DIR, "empty_message")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count, expected_count=0) self._assert_invalid_extension_error_event_reported(mock_event, extensions_with_count, InvalidExtensionEventError.EmptyMessageError, expected_drop_count=1) def test_it_should_not_process_events_if_send_telemetry_events_handler_stopped(self): event_list = [] telemetry_handler = MagicMock(autospec=True) telemetry_handler.stopped = MagicMock(return_value=True) telemetry_handler.enqueue_event = MagicMock(wraps=event_list.append) with self._create_extension_telemetry_processor(telemetry_handler) as extension_telemetry_processor: self._create_random_extension_events_dir_with_events(3, self._WELL_FORMED_FILES) extension_telemetry_processor.run() self.assertEqual(0, len(event_list), "No events should have been enqueued") def test_it_should_not_delete_event_files_except_current_one_if_service_stopped_midway(self): event_list = [] telemetry_handler = MagicMock(autospec=True) telemetry_handler.stopped = MagicMock(return_value=False)
telemetry_handler.enqueue_event = MagicMock(side_effect=ServiceStoppedError("Telemetry service stopped"), wraps=event_list.append) no_of_extensions = 3 # self._WELL_FORMED_FILES has 3 event files, i.e. total files for 3 extensions = 3 * 3 = 9 # But since we delete the file that we were processing last, expected count = 8 expected_event_file_count = 8 with self._create_extension_telemetry_processor(telemetry_handler) as extension_telemetry_processor: ext_names = self._create_random_extension_events_dir_with_events(no_of_extensions, self._WELL_FORMED_FILES) extension_telemetry_processor.run() self.assertEqual(0, len(event_list), "No events should have been enqueued") total_file_count = 0 for ext_name in ext_names: event_dir = os.path.join(conf.get_ext_log_dir(), ext_name, EVENTS_DIRECTORY) file_count = len(os.listdir(event_dir)) self.assertGreater(file_count, 0, "Some event files should still be there") total_file_count += file_count self.assertEqual(expected_event_file_count, total_file_count, "Expected File count doesn't match")
Azure-WALinuxAgent-a976115/tests/ga/test_cpucontroller.py
# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# # Requires Python 2.4+ and Openssl 1.0+ # from __future__ import print_function import errno import os import random import shutil from azurelinuxagent.ga.cgroupcontroller import MetricsCounter from azurelinuxagent.ga.cpucontroller import CpuControllerV1, CpuControllerV2 from azurelinuxagent.common.exception import CGroupsException from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil from tests.lib.tools import AgentTestCase, patch, data_dir def consume_cpu_time(): waste = 0 for x in range(1, 200000): # pylint: disable=unused-variable waste += random.random() return waste class TestCpuControllerV1(AgentTestCase): @classmethod def setUpClass(cls): AgentTestCase.setUpClass() original_read_file = fileutil.read_file # # Tests that need to mock the contents of /proc/stat or */cpuacct/stat can set this map from # the file that needs to be mocked to the mock file (each test starts with an empty map). If # an Exception is given instead of a path, the exception is raised # cls.mock_read_file_map = {} def mock_read_file(filepath, **args): if filepath in cls.mock_read_file_map: mapped_value = cls.mock_read_file_map[filepath] if isinstance(mapped_value, Exception): raise mapped_value filepath = mapped_value return original_read_file(filepath, **args) cls.mock_read_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file) cls.mock_read_file.start() @classmethod def tearDownClass(cls): cls.mock_read_file.stop() AgentTestCase.tearDownClass() def setUp(self): AgentTestCase.setUp(self) TestCpuControllerV1.mock_read_file_map.clear() def test_initialize_cpu_usage_v1_should_set_current_cpu_usage(self): controller = CpuControllerV1("test", "/sys/fs/cgroup/cpu/system.slice/test") TestCpuControllerV1.mock_read_file_map = { "/proc/stat": os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0"), os.path.join(controller.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t0") } 
controller.initialize_cpu_usage() self.assertEqual(controller._current_cgroup_cpu, 63763) self.assertEqual(controller._current_system_cpu, 5496872) def test_get_cpu_usage_v1_should_return_the_cpu_usage_since_its_last_invocation(self): osutil = get_osutil() controller = CpuControllerV1("test", "/sys/fs/cgroup/cpu/system.slice/test") TestCpuControllerV1.mock_read_file_map = { "/proc/stat": os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0"), os.path.join(controller.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t0") } controller.initialize_cpu_usage() TestCpuControllerV1.mock_read_file_map = { "/proc/stat": os.path.join(data_dir, "cgroups", "v1", "proc_stat_t1"), os.path.join(controller.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t1") } cpu_usage = controller.get_cpu_usage() self.assertEqual(cpu_usage, round(100.0 * 0.000307697876885 * osutil.get_processor_cores(), 3)) TestCpuControllerV1.mock_read_file_map = { "/proc/stat": os.path.join(data_dir, "cgroups", "v1", "proc_stat_t2"), os.path.join(controller.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t2") } cpu_usage = controller.get_cpu_usage() self.assertEqual(cpu_usage, round(100.0 * 0.000445181085968 * osutil.get_processor_cores(), 3)) def test_initialize_cpu_usage_v1_should_set_the_cgroup_usage_to_0_when_the_cgroup_does_not_exist(self): controller = CpuControllerV1("test", "/sys/fs/cgroup/cpu/system.slice/test") io_error_2 = IOError() io_error_2.errno = errno.ENOENT # "No such directory" TestCpuControllerV1.mock_read_file_map = { "/proc/stat": os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0"), os.path.join(controller.path, "cpuacct.stat"): io_error_2 } controller.initialize_cpu_usage() self.assertEqual(controller._current_cgroup_cpu, 0) self.assertEqual(controller._current_system_cpu, 5496872) # check the system usage just for test sanity def 
test_initialize_cpu_usage_v1_should_raise_an_exception_when_called_more_than_once(self): controller = CpuControllerV1("test", "/sys/fs/cgroup/cpu/system.slice/test") TestCpuControllerV1.mock_read_file_map = { "/proc/stat": os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0"), os.path.join(controller.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t0") } controller.initialize_cpu_usage() with self.assertRaises(CGroupsException): controller.initialize_cpu_usage() def test_get_cpu_usage_v1_should_raise_an_exception_when_initialize_cpu_usage_has_not_been_invoked(self): controller = CpuControllerV1("test", "/sys/fs/cgroup/cpu/system.slice/test") with self.assertRaises(CGroupsException): cpu_usage = controller.get_cpu_usage() # pylint: disable=unused-variable def test_get_throttled_time_v1_should_return_the_value_since_its_last_invocation(self): test_file = os.path.join(self.tmp_dir, "cpu.stat") controller = CpuControllerV1("test", self.tmp_dir) controller.initialize_cpu_usage() controller.track_throttle_time(True) shutil.copyfile(os.path.join(data_dir, "cgroups", "v1", "cpu.stat_t0"), test_file) # throttled_time = 50 controller.get_cpu_throttled_time() shutil.copyfile(os.path.join(data_dir, "cgroups", "v1", "cpu.stat_t1"), test_file) # throttled_time = 2075541442327 throttled_time = controller.get_cpu_throttled_time() self.assertEqual(throttled_time, round(float(2075541442327 - 50) / 1E9, 3), "The value of throttled_time is incorrect") def test_get_tracked_metrics_v1_should_return_the_throttled_time(self): controller = CpuControllerV1("test", os.path.join(data_dir, "cgroups", "v1")) controller.initialize_cpu_usage() controller.track_throttle_time(True) def find_throttled_time(metrics): return [m for m in metrics if m.counter == MetricsCounter.THROTTLED_TIME] found = find_throttled_time(controller.get_tracked_metrics()) self.assertTrue(len(found) == 1, "get_tracked_metrics should have fetched the throttled time by default. 
Found: {0}".format(found)) class TestCpuControllerV2(AgentTestCase): @classmethod def setUpClass(cls): AgentTestCase.setUpClass() original_read_file = fileutil.read_file # # Tests that need to mock the contents of /proc/stat or */cpuacct/stat can set this map from # the file that needs to be mocked to the mock file (each test starts with an empty map). If # an Exception is given instead of a path, the exception is raised # cls.mock_read_file_map = {} def mock_read_file(filepath, **args): if filepath in cls.mock_read_file_map: mapped_value = cls.mock_read_file_map[filepath] if isinstance(mapped_value, Exception): raise mapped_value filepath = mapped_value return original_read_file(filepath, **args) cls.mock_read_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file) cls.mock_read_file.start() @classmethod def tearDownClass(cls): cls.mock_read_file.stop() AgentTestCase.tearDownClass() def setUp(self): AgentTestCase.setUp(self) TestCpuControllerV2.mock_read_file_map.clear() def test_initialize_cpu_usage_v2_should_set_current_cpu_usage(self): controller = CpuControllerV2("test", "/sys/fs/cgroup/cpu/system.slice/test") TestCpuControllerV2.mock_read_file_map = { "/proc/uptime": os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t0"), os.path.join(controller.path, "cpu.stat"): os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t0") } controller.initialize_cpu_usage() self.assertEqual(controller._current_cgroup_cpu, 817045397 / 1E6) self.assertEqual(controller._current_system_cpu, 776968.02) def test_get_cpu_usage_v2_should_return_the_cpu_usage_since_its_last_invocation(self): controller = CpuControllerV2("test", "/sys/fs/cgroup/cpu/system.slice/test") TestCpuControllerV2.mock_read_file_map = { "/proc/uptime": os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t0"), os.path.join(controller.path, "cpu.stat"): os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t0") } controller.initialize_cpu_usage() 
TestCpuControllerV2.mock_read_file_map = { "/proc/uptime": os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t1"), os.path.join(controller.path, "cpu.stat"): os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t1") } cpu_usage = controller.get_cpu_usage() cgroup_usage_delta = (819624087 / 1E6) - (817045397 / 1E6) system_usage_delta = 777350.57 - 776968.02 self.assertEqual(cpu_usage, round(100.0 * cgroup_usage_delta/system_usage_delta, 3)) TestCpuControllerV2.mock_read_file_map = { "/proc/uptime": os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t2"), os.path.join(controller.path, "cpu.stat"): os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t2") } cpu_usage = controller.get_cpu_usage() cgroup_usage_delta = (822052295 / 1E6) - (819624087 / 1E6) system_usage_delta = 779218.68 - 777350.57 self.assertEqual(cpu_usage, round(100.0 * cgroup_usage_delta/system_usage_delta, 3)) def test_initialize_cpu_usage_v2_should_set_the_cgroup_usage_to_0_when_the_cgroup_does_not_exist(self): controller = CpuControllerV2("test", "/sys/fs/cgroup/cpu/system.slice/test") io_error_2 = IOError() io_error_2.errno = errno.ENOENT # "No such directory" TestCpuControllerV2.mock_read_file_map = { "/proc/uptime": os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t0"), os.path.join(controller.path, "cpu.stat"): io_error_2 } controller.initialize_cpu_usage() self.assertEqual(controller._current_cgroup_cpu, 0) self.assertEqual(controller._current_system_cpu, 776968.02) # check the system usage just for test sanity def test_initialize_cpu_usage_v2_should_raise_an_exception_when_called_more_than_once(self): controller = CpuControllerV2("test", "/sys/fs/cgroup/cpu/system.slice/test") TestCpuControllerV2.mock_read_file_map = { "/proc/uptime": os.path.join(data_dir, "cgroups", "v2", "proc_uptime_t0"), os.path.join(controller.path, "cpu.stat"): os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t0") } controller.initialize_cpu_usage() with self.assertRaises(CGroupsException): 
controller.initialize_cpu_usage() def test_get_cpu_usage_v2_should_raise_an_exception_when_initialize_cpu_usage_has_not_been_invoked(self): controller = CpuControllerV2("test", "/sys/fs/cgroup/cpu/system.slice/test") with self.assertRaises(CGroupsException): cpu_usage = controller.get_cpu_usage() # pylint: disable=unused-variable def test_get_throttled_time_v2_should_return_the_value_since_its_last_invocation(self): controller = CpuControllerV2("test", self.tmp_dir) controller.initialize_cpu_usage() controller.track_throttle_time(True) test_file = os.path.join(self.tmp_dir, "cpu.stat") shutil.copyfile(os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t0"), test_file) # throttled_time = 15735198706 controller.get_cpu_throttled_time() shutil.copyfile(os.path.join(data_dir, "cgroups", "v2", "cpu.stat_t1"), test_file) # throttled_usec = 15796563650 throttled_time = controller.get_cpu_throttled_time() self.assertEqual(throttled_time, round(float(15796563650 - 15735198706) / 1E6, 3), "The value of throttled_time is incorrect") def test_get_tracked_metrics_v2_should_return_the_throttled_time(self): controller = CpuControllerV2("test", os.path.join(data_dir, "cgroups", "v2")) controller.initialize_cpu_usage() controller.track_throttle_time(True) def find_throttled_time(metrics): return [m for m in metrics if m.counter == MetricsCounter.THROTTLED_TIME] found = find_throttled_time(controller.get_tracked_metrics()) self.assertTrue(len(found) == 1, "get_tracked_metrics should have fetched the throttled time by default. Found: {0}".format(found))
Azure-WALinuxAgent-a976115/tests/ga/test_env.py
# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.osutil.default import DefaultOSUtil, shellutil from azurelinuxagent.ga.env import MonitorDhcpClientRestart, EnableFirewall from tests.lib.tools import AgentTestCase, patch from tests.lib.mock_firewall_command import MockIpTables class MonitorDhcpClientRestartTestCase(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) # save the original run_command so that mocks can reference it self.shellutil_run_command = shellutil.run_command # save an instance of the original DefaultOSUtil so that mocks can reference it self.default_osutil = DefaultOSUtil() # AgentTestCase.setUp mocks osutil.factory._get_osutil; we override that mock for this class with a new mock # that always returns the default implementation. 
        self.mock_get_osutil = patch("azurelinuxagent.common.osutil.factory._get_osutil", return_value=DefaultOSUtil())
        self.mock_get_osutil.start()

    def tearDown(self):
        self.mock_get_osutil.stop()
        AgentTestCase.tearDown(self)

    def test_get_dhcp_client_pid_should_return_a_sorted_list_of_pids(self):
        with patch("azurelinuxagent.common.utils.shellutil.run_command", return_value="11 9 5 22 4 6"):
            pids = MonitorDhcpClientRestart(get_osutil())._get_dhcp_client_pid()
        self.assertEqual(pids, [4, 5, 6, 9, 11, 22])

    def test_get_dhcp_client_pid_should_return_an_empty_list_and_log_a_warning_when_dhcp_client_is_not_running(self):
        with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["pidof", "non-existing-process"])):
            with patch('azurelinuxagent.common.logger.Logger.warn') as mock_warn:
                pids = MonitorDhcpClientRestart(get_osutil())._get_dhcp_client_pid()

        self.assertEqual(pids, [])

        self.assertEqual(mock_warn.call_count, 1)
        args, kwargs = mock_warn.call_args  # pylint: disable=unused-variable
        message = args[0]
        self.assertEqual("Dhcp client is not running.", message)

    def test_get_dhcp_client_pid_should_return_and_empty_list_and_log_an_error_when_an_invalid_command_is_used(self):
        with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["non-existing-command"])):
            with patch('azurelinuxagent.common.logger.Logger.error') as mock_error:
                pids = MonitorDhcpClientRestart(get_osutil())._get_dhcp_client_pid()

        self.assertEqual(pids, [])

        self.assertEqual(mock_error.call_count, 1)
        args, kwargs = mock_error.call_args  # pylint: disable=unused-variable
        self.assertIn("Failed to get the PID of the DHCP client", args[0])
        self.assertIn("No such file or directory", args[1])

    def test_get_dhcp_client_pid_should_not_log_consecutive_errors(self):
        monitor_dhcp_client_restart = MonitorDhcpClientRestart(get_osutil())

        with patch('azurelinuxagent.common.logger.Logger.warn') as mock_warn:
            def assert_warnings(count):
                self.assertEqual(mock_warn.call_count, count)

                for call_args in mock_warn.call_args_list:
                    args, _ = call_args
                    self.assertEqual("Dhcp client is not running.", args[0])

            with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["pidof", "non-existing-process"])):
                # it should log the first error
                pids = monitor_dhcp_client_restart._get_dhcp_client_pid()
                self.assertEqual(pids, [])
                assert_warnings(1)

                # it should not log subsequent errors
                for _ in range(0, 3):
                    pids = monitor_dhcp_client_restart._get_dhcp_client_pid()
                    self.assertEqual(pids, [])
                    self.assertEqual(mock_warn.call_count, 1)

            with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", return_value="123"):
                # now it should succeed
                pids = monitor_dhcp_client_restart._get_dhcp_client_pid()
                self.assertEqual(pids, [123])
                assert_warnings(1)

            with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["pidof", "non-existing-process"])):
                # it should log the new error
                pids = monitor_dhcp_client_restart._get_dhcp_client_pid()
                self.assertEqual(pids, [])
                assert_warnings(2)

                # it should not log subsequent errors
                for _ in range(0, 3):
                    pids = monitor_dhcp_client_restart._get_dhcp_client_pid()
                    self.assertEqual(pids, [])
                    self.assertEqual(mock_warn.call_count, 2)

    def test_handle_dhclient_restart_should_reconfigure_network_routes_when_dhcp_client_restarts(self):
        with patch("azurelinuxagent.common.dhcp.DhcpHandler.conf_routes") as mock_conf_routes:
            monitor_dhcp_client_restart = MonitorDhcpClientRestart(get_osutil())
            monitor_dhcp_client_restart._period = datetime.timedelta(seconds=0)

            # Run the operation one time to initialize the DHCP PIDs
            with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", return_value=[123]):
                monitor_dhcp_client_restart.run()

            #
            # if the dhcp client has not been restarted then it should not reconfigure the network routes
            #
            def mock_check_pid_alive(pid):
                if pid == 123:
                    return True
                raise Exception("Unexpected PID: {0}".format(pid))

            with patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.check_pid_alive", side_effect=mock_check_pid_alive):
                with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", side_effect=Exception("get_dhcp_client_pid should not have been invoked")):
                    monitor_dhcp_client_restart.run()
                    self.assertEqual(mock_conf_routes.call_count, 1)  # count did not change

            #
            # if the process was restarted then it should reconfigure the network routes
            #
            def mock_check_pid_alive(pid):  # pylint: disable=function-redefined
                if pid == 123:
                    return False
                raise Exception("Unexpected PID: {0}".format(pid))

            with patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.check_pid_alive", side_effect=mock_check_pid_alive):
                with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", return_value=[456, 789]):
                    monitor_dhcp_client_restart.run()
                    self.assertEqual(mock_conf_routes.call_count, 2)  # count increased

            #
            # if the new dhcp client has not been restarted then it should not reconfigure the network routes
            #
            def mock_check_pid_alive(pid):  # pylint: disable=function-redefined
                if pid in [456, 789]:
                    return True
                raise Exception("Unexpected PID: {0}".format(pid))

            with patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.check_pid_alive", side_effect=mock_check_pid_alive):
                with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", side_effect=Exception("get_dhcp_client_pid should not have been invoked")):
                    monitor_dhcp_client_restart.run()
                    self.assertEqual(mock_conf_routes.call_count, 2)  # count did not change


class TestEnableFirewall(AgentTestCase):
    def test_it_should_restore_missing_firewall_rules(self):
        with MockIpTables() as mock_iptables:
            enable_firewall = EnableFirewall('168.63.129.16')

            test_cases = [  # Exit codes for the "-C" (check) command
                {"accept_dns": 1, "accept": 0, "drop": 0, "legacy": 0},
                {"accept_dns": 0, "accept": 1, "drop": 0, "legacy": 0},
                {"accept_dns": 0, "accept": 1, "drop": 0, "legacy": 0},
                {"accept_dns": 1, "accept": 1, "drop": 1, "legacy": 0},
            ]

            for test_case in test_cases:
                mock_iptables.set_return_values("-C", **test_case)

                enable_firewall.run()

                self.assertGreaterEqual(len(mock_iptables.call_list), 3, "Expected at least 3 iptables commands, got {0} (Test case: {1})".format(mock_iptables.call_list, test_case))
                self.assertEqual(
                    [
                        mock_iptables.get_accept_dns_command("-A"),
                        mock_iptables.get_accept_command("-A"),
                        mock_iptables.get_drop_command("-A"),
                    ],
                    mock_iptables.call_list[-3:],
                    "Expected the 3 firewall rules to be restored (Test case: {0})".format(test_case))
Azure-WALinuxAgent-a976115/tests/ga/test_extension.py000066400000000000000000007216061510742556200225640ustar00rootroot00000000000000
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
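The `TestEnableFirewall` test above exercises the restore behavior: `iptables -C` exits 0 when a rule exists and non-zero when it is missing, and when any of the three wireserver rules is missing they are all re-appended in a fixed order. A standalone, hedged sketch of that decision logic (the helper `rules_to_append` is illustrative only, not agent code):

```python
# Illustrative sketch of the behavior TestEnableFirewall verifies: given the
# exit codes of "iptables -C" for each of the three wireserver rules, decide
# which rules to re-append with "iptables -A". Per the test's expectations,
# a single missing rule causes all three to be restored, in a fixed order.

def rules_to_append(check_exit_codes):
    """check_exit_codes maps rule name -> exit code of 'iptables -C' (0 = rule exists)."""
    order = ["accept_dns", "accept", "drop"]
    if any(check_exit_codes[rule] != 0 for rule in order):
        return order  # restore all three rules, preserving their order
    return []

print(rules_to_append({"accept_dns": 1, "accept": 0, "drop": 0}))
# ['accept_dns', 'accept', 'drop']
```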
#
# Requires Python 2.6+ and Openssl 1.0+
#

import contextlib
import datetime
import glob
import json
import os.path
import random
import re
import shutil
import subprocess
import tempfile
import time
import unittest

from azurelinuxagent.common import conf
from azurelinuxagent.common.agent_supported_feature import get_agent_supported_features_list_for_crp
from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator
from azurelinuxagent.common.datacontract import get_properties
from azurelinuxagent.common.event import WALAEventOperation
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.common.utils.fileutil import read_file
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.version import AGENT_VERSION
from azurelinuxagent.common.exception import ResourceGoneError, ExtensionDownloadError, ProtocolError, \
    ExtensionErrorCodes, ExtensionError, GoalStateAggregateStatusCodes
from azurelinuxagent.common.protocol.restapi import ExtensionSettings, Extension, ExtHandlerStatus, \
    ExtensionStatus, ExtensionRequestedState
from azurelinuxagent.common.protocol import wire
from azurelinuxagent.common.protocol.wire import WireProtocol, InVMArtifactsProfile
from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP
from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME
from azurelinuxagent.ga.signing_certificate_util import write_signing_certificates
from azurelinuxagent.ga.exthandlers import ExtHandlerInstance, migrate_handler_state, \
    get_exthandlers_handler, ExtCommandEnvVariable, HandlerManifest, NOT_RUN, \
    ExtensionStatusValue, HANDLER_COMPLETE_NAME_PATTERN, HandlerEnvironment, GoalStateStatus, ExtHandlerState
from azurelinuxagent.ga.signature_validation_util import signature_has_been_validated
from tests.lib import wire_protocol_data
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib.http_request_predicates import HttpRequestPredicates
from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_EXT_ADDITIONAL_LOCATIONS
from tests.lib.tools import AgentTestCase, data_dir, MagicMock, Mock, patch, mock_sleep, load_bin_data, load_data
from tests.lib.extension_emulator import Actions, ExtensionCommandNames, extension_emulator, \
    enable_invocations, generate_put_handler

# Mocking the original sleep to reduce test execution time
SLEEP = time.sleep

SUCCESS_CODE_FROM_STATUS_FILE = 1


def raise_system_exception():
    raise Exception


def raise_ioerror(*args):  # pylint: disable=unused-argument
    e = IOError()
    from errno import EIO
    e.errno = EIO
    raise e


class TestExtensionCleanup(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01))
        self.mock_sleep.start()

    def tearDown(self):
        self.mock_sleep.stop()
        AgentTestCase.tearDown(self)

    @staticmethod
    def _count_packages():
        return len(glob.glob(os.path.join(conf.get_lib_dir(), "*.zip")))

    @staticmethod
    def _count_extension_directories():
        paths = [os.path.join(conf.get_lib_dir(), p) for p in os.listdir(conf.get_lib_dir())]
        return len([p for p in paths if os.path.isdir(p) and TestExtensionCleanup._is_extension_dir(p)])

    @staticmethod
    def _is_extension_dir(path):
        return re.match(HANDLER_COMPLETE_NAME_PATTERN, os.path.basename(path)) is not None

    def _assert_ext_handler_status(self, aggregate_status, expected_status, version, expected_ext_handler_count=0, verify_ext_reported=True):
        self.assertIsNotNone(aggregate_status, "Aggregate status should not be None")
        handler_statuses = aggregate_status['aggregateStatus']['handlerAggregateStatus']
        self.assertEqual(expected_ext_handler_count, len(handler_statuses), "All ExtensionHandlers: {0}".format(handler_statuses))
        for ext_handler_status in handler_statuses:
            debug_info = "ExtensionHandler: {0}".format(ext_handler_status)
            self.assertEqual(expected_status, ext_handler_status['status'], debug_info)
            self.assertEqual(version, ext_handler_status['handlerVersion'], debug_info)
            if verify_ext_reported:
                self.assertIn("runtimeSettingsStatus", ext_handler_status, debug_info)
        return

    @contextlib.contextmanager
    def _setup_test_env(self, test_data):
        with mock_wire_protocol(test_data) as protocol:
            def mock_http_put(url, *args, **_):
                if HttpRequestPredicates.is_host_plugin_status_request(url):
                    # Skip reading the HostGA request data as its encoded
                    return MockHttpResponse(status=500)
                protocol.aggregate_status = json.loads(args[0])
                return MockHttpResponse(status=201)

            protocol.aggregate_status = None
            protocol.set_http_handlers(http_put_handler=mock_http_put)
            no_of_extensions = protocol.mock_wire_data.get_no_of_plugins_in_extension_config()
            exthandlers_handler = get_exthandlers_handler(protocol)
            yield exthandlers_handler, protocol, no_of_extensions

    def test_cleanup_leaves_installed_extensions(self):
        with self._setup_test_env(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) as (exthandlers_handler, protocol, no_of_exts):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(no_of_exts, TestExtensionCleanup._count_extension_directories(), "No of extension directories doesnt match the no of extensions in GS")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=no_of_exts, version="1.0.0")

    def test_cleanup_removes_uninstalled_extensions(self):
        with self._setup_test_env(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) as (exthandlers_handler, protocol, no_of_exts):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=no_of_exts, version="1.0.0")

            # Update incarnation and extension config
            protocol.mock_wire_data.set_incarnation(2)
            protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()
            self.assertEqual(0, TestExtensionCleanup._count_packages(), "All packages must be deleted")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=0, version="1.0.0")
            self.assertEqual(0, TestExtensionCleanup._count_extension_directories(), "All extension directories should be removed")

    def test_cleanup_removes_orphaned_packages(self):
        data_file = wire_protocol_data.DATA_FILE_NO_EXT.copy()
        data_file["ext_conf"] = "wire/ext_conf_no_extensions-no_status_blob.xml"
        no_of_orphaned_packages = 5
        with self._setup_test_env(data_file) as (exthandlers_handler, protocol, no_of_exts):
            self.assertEqual(no_of_exts, 0, "Test setup error - Extensions found in ExtConfig")

            # Create random extension directories
            for i in range(no_of_orphaned_packages):
                eh = Extension(name='Random.Extension.ShouldNot.Be.There')
                eh.version = FlexibleVersion("9.9.0") + i
                handler = ExtHandlerInstance(eh, "unused")
                os.mkdir(handler.get_base_dir())

            self.assertEqual(no_of_orphaned_packages, TestExtensionCleanup._count_extension_directories(), "Test Setup error - Not enough extension directories")

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(no_of_exts, TestExtensionCleanup._count_extension_directories(), "There should be no extension directories in FS")
            self.assertIsNone(protocol.aggregate_status, "Since there's no ExtConfig, we shouldn't even report status as we pull status blob link from ExtConfig")

    def test_cleanup_leaves_failed_extensions(self):
        original_popen = subprocess.Popen

        def mock_fail_popen(*args, **kwargs):  # pylint: disable=unused-argument
            return original_popen("fail_this_command", **kwargs)

        with self._setup_test_env(wire_protocol_data.DATA_FILE_EXT_SINGLE) as (exthandlers_handler, protocol, no_of_exts):
            with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", mock_fail_popen):
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
            self._assert_ext_handler_status(protocol.aggregate_status, "NotReady", expected_ext_handler_count=no_of_exts, version="1.0.0", verify_ext_reported=False)
            self.assertEqual(no_of_exts, TestExtensionCleanup._count_extension_directories(), "There should still be 1 extension directory in FS")

            # Update incarnation and extension config to uninstall the extension, this should delete the extension
            protocol.mock_wire_data.set_incarnation(2)
            protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(0, TestExtensionCleanup._count_packages(), "All packages must be deleted")
            self.assertEqual(0, TestExtensionCleanup._count_extension_directories(), "All extension directories should be removed")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=0, version="1.0.0")

    def test_it_should_report_and_cleanup_only_if_gs_supported(self):
        def assert_gs_aggregate_status(seq_no, status, code):
            gs_status = protocol.aggregate_status['aggregateStatus']['vmArtifactsAggregateStatus']['goalStateAggregateStatus']
            self.assertEqual(gs_status['inSvdSeqNo'], seq_no, "Seq number not matching")
            self.assertEqual(gs_status['code'], code, "The error code not matching")
            self.assertEqual(gs_status['status'], status, "The status not matching")

        def assert_extension_seq_no(expected_seq_no):
            for handler_status in protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']:
                self.assertEqual(expected_seq_no, handler_status['runtimeSettingsStatus']['sequenceNumber'], "Sequence number mismatch")

        with self._setup_test_env(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) as (exthandlers_handler, protocol, orig_no_of_exts):
            # Run 1 - GS has no required features and contains 5 extensions
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(orig_no_of_exts, TestExtensionCleanup._count_extension_directories(), "No of extension directories doesnt match the no of extensions in GS")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=orig_no_of_exts, version="1.0.0")
            assert_gs_aggregate_status(seq_no='1', status=GoalStateStatus.Success, code=GoalStateAggregateStatusCodes.Success)
            assert_extension_seq_no(expected_seq_no=0)

            # Run 2 - Change the GS to one with Required features not supported by the agent
            # This ExtensionConfig has 1 extension - ExampleHandlerLinuxWithRequiredFeatures
            protocol.mock_wire_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_REQUIRED_FEATURES)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.mock_wire_data.set_extensions_config_sequence_number(random.randint(10, 100))
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertGreater(orig_no_of_exts, 1, "No of extensions to check should be > 1")
            self.assertEqual(orig_no_of_exts, TestExtensionCleanup._count_extension_directories(), "No of extension directories should not be changed")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=orig_no_of_exts, version="1.0.0")
            assert_gs_aggregate_status(seq_no='2', status=GoalStateStatus.Failed, code=GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures)
            # Since its an unsupported GS, we should report the last state of extensions
            assert_extension_seq_no(0)
            # assert the extension in the new Config was not reported as that GS was not executed
            self.assertTrue(any('ExampleHandlerLinuxWithRequiredFeatures' not in ext_handler_status['handlerName'] for ext_handler_status in protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "Unwanted handler found in status reporting")

            # Run 3 - Run a GS with no Required Features and ensure we execute all extensions properly
            # This ExtensionConfig has 1 extension - OSTCExtensions.ExampleHandlerLinux
            protocol.mock_wire_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
            protocol.mock_wire_data.set_incarnation(3)
            extension_seq_no = random.randint(10, 100)
            protocol.mock_wire_data.set_extensions_config_sequence_number(extension_seq_no)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, TestExtensionCleanup._count_extension_directories(), "No of extension directories should not be changed")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=1, version="1.0.0")
            assert_gs_aggregate_status(seq_no='3', status=GoalStateStatus.Success, code=GoalStateAggregateStatusCodes.Success)
            assert_extension_seq_no(expected_seq_no=extension_seq_no)
            # Only OSTCExtensions.ExampleHandlerLinux extension should be reported
            self.assertEqual('OSTCExtensions.ExampleHandlerLinux', protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus'][0]['handlerName'], "Expected handler not found in status reporting")


class TestHandlerStateMigration(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

        handler_name = "Not.A.Real.Extension"
        handler_version = "1.2.3"

        self.ext_handler = Extension(handler_name)
        self.ext_handler.version = handler_version
        self.ext_handler_i = ExtHandlerInstance(self.ext_handler, "dummy protocol")

        self.handler_state = "Enabled"
        self.handler_status = ExtHandlerStatus(
            name=handler_name,
            version=handler_version,
            status="Ready",
            message="Uninteresting message")
        return

    def _prepare_handler_state(self):
        handler_state_path = os.path.join(
            self.tmp_dir,
            "handler_state",
            self.ext_handler_i.get_full_name())
        os.makedirs(handler_state_path)
        fileutil.write_file(
            os.path.join(handler_state_path, "state"),
            self.handler_state)
        fileutil.write_file(
            os.path.join(handler_state_path, "status"),
            json.dumps(get_properties(self.handler_status)))
        return

    def _prepare_handler_config(self):
        handler_config_path = os.path.join(
            self.tmp_dir,
            self.ext_handler_i.get_full_name(),
            "config")
        os.makedirs(handler_config_path)
        return

    def test_migration_migrates(self):
        self._prepare_handler_state()
        self._prepare_handler_config()

        migrate_handler_state()

        self.assertEqual(self.ext_handler_i.get_handler_state(), self.handler_state)
        self.assertEqual(
            self.ext_handler_i.get_handler_status().status,
            self.handler_status.status)
        return

    def test_migration_skips_if_empty(self):
        self._prepare_handler_config()

        migrate_handler_state()

        self.assertFalse(
            os.path.isfile(os.path.join(self.ext_handler_i.get_conf_dir(), "HandlerState")))
        self.assertFalse(
            os.path.isfile(os.path.join(self.ext_handler_i.get_conf_dir(), "HandlerStatus")))
        return

    def test_migration_cleans_up(self):
        self._prepare_handler_state()
        self._prepare_handler_config()

        migrate_handler_state()

        self.assertFalse(os.path.isdir(os.path.join(conf.get_lib_dir(), "handler_state")))
        return

    def test_migration_does_not_overwrite(self):
        self._prepare_handler_state()
        self._prepare_handler_config()

        state = "Installed"
        status = "NotReady"
        code = 1
        message = "A message"
        self.assertNotEqual(state, self.handler_state)
        self.assertNotEqual(status, self.handler_status.status)
        self.assertNotEqual(code, self.handler_status.code)
        self.assertNotEqual(message, self.handler_status.message)

        self.ext_handler_i.set_handler_state(state)
        self.ext_handler_i.set_handler_status(status=status, code=code, message=message)

        migrate_handler_state()

        self.assertEqual(self.ext_handler_i.get_handler_state(), state)
        handler_status = self.ext_handler_i.get_handler_status()
        self.assertEqual(handler_status.status, status)
        self.assertEqual(handler_status.code, code)
        self.assertEqual(handler_status.message, message)
        return

    def test_set_handler_status_ignores_none_content(self):
        """
        Validate that set_handler_status ignore cases where json.dumps
        returns a value of None.
        """
        self._prepare_handler_state()
        self._prepare_handler_config()

        status = "Ready"
        code = 0
        message = "A message"

        try:
            with patch('json.dumps', return_value=None):
                self.ext_handler_i.set_handler_status(status=status, code=code, message=message)
        except Exception as e:  # pylint: disable=unused-variable
            self.fail("set_handler_status threw an exception")

    @patch("shutil.move", side_effect=Exception)
    def test_migration_ignores_move_errors(self, shutil_mock):  # pylint: disable=unused-argument
        self._prepare_handler_state()
        self._prepare_handler_config()

        try:
            migrate_handler_state()
        except Exception as e:
            self.assertTrue(False, "Unexpected exception: {0}".format(str(e)))  # pylint: disable=redundant-unittest-assert
        return

    @patch("shutil.rmtree", side_effect=Exception)
    def test_migration_ignores_tree_remove_errors(self, shutil_mock):  # pylint: disable=unused-argument
        self._prepare_handler_state()
        self._prepare_handler_config()

        try:
            migrate_handler_state()
        except Exception as e:
            self.assertTrue(False, "Unexpected exception: {0}".format(str(e)))  # pylint: disable=redundant-unittest-assert
        return


class TestExtensionBase(AgentTestCase):
    def _assert_handler_status(self, report_vm_status, expected_status, expected_ext_count, version,
                               expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_msg=None, expected_code=None):
        self.assertTrue(report_vm_status.called)
        args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
        vm_status = args[0]
        self.assertNotEqual(0, len(vm_status.vmAgent.extensionHandlers))
        handler_status = next(
            status for status in vm_status.vmAgent.extensionHandlers if status.name == expected_handler_name)
        self.assertEqual(expected_status, handler_status.status, get_properties(handler_status))
        self.assertEqual(expected_handler_name, handler_status.name)
        self.assertEqual(version, handler_status.version)
        self.assertEqual(expected_ext_count, len([ext_handler for ext_handler in vm_status.vmAgent.extensionHandlers if ext_handler.name == expected_handler_name and ext_handler.extension_status is not None]))
        if expected_msg is not None:
            self.assertIn(expected_msg, handler_status.message)
        if expected_code is not None:
            self.assertEqual(expected_code, handler_status.code)


# Deprecated. New tests should be added to the TestExtension class
@patch('time.sleep', side_effect=lambda _: mock_sleep(0.001))
@patch("azurelinuxagent.common.protocol.wire.CryptUtil")
@patch("azurelinuxagent.common.utils.restutil.http_get")
class TestExtension_Deprecated(TestExtensionBase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def _assert_ext_pkg_file_status(self, expected_to_be_present=True, extension_version="1.0.0",
                                    extension_handler_name="OSTCExtensions.ExampleHandlerLinux"):
        zip_file_format = "{0}__{1}.zip"
        if expected_to_be_present:
            self.assertIn(zip_file_format.format(extension_handler_name, extension_version), os.listdir(conf.get_lib_dir()))
        else:
            self.assertNotIn(zip_file_format.format(extension_handler_name, extension_version), os.listdir(conf.get_lib_dir()))

    def _assert_no_handler_status(self, report_vm_status):
        self.assertTrue(report_vm_status.called)
        args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
        vm_status = args[0]
        self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers))
        return

    @staticmethod
    def _create_mock(test_data, mock_http_get, mock_crypt_util, *_):
        # Mock protocol to return test data
        mock_http_get.side_effect = test_data.mock_http_get
        mock_crypt_util.side_effect = test_data.mock_crypt_util

        protocol = WireProtocol(KNOWN_WIRESERVER_IP)
        protocol.detect()
        protocol.report_vm_status = MagicMock()

        handler = get_exthandlers_handler(protocol)
        return handler, protocol

    def _set_up_update_test_and_update_gs(self, patch_command, *args):
        """
        This helper function sets up the Update test by setting up the protocol and ext_handler and asserts the
        ext_handler runs fine the first time before patching a failure command for testing.
        :param patch_command: The patch_command to setup for failure
        :param args: Any additional args passed to the function, needed for creating a mock for handler and protocol
        :return: test_data, exthandlers_handler, protocol
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        # Ensure initial install and enable is successful
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(0, patch_command.call_count)
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Next incarnation, update version
        test_data.set_incarnation(2)
        test_data.set_extensions_config_version("1.0.1")
        test_data.set_manifest_version('1.0.1')
        protocol.client.update_goal_state()

        # Ensure the patched command fails
        patch_command.return_value = "exit 1"

        return test_data, exthandlers_handler, protocol

    @staticmethod
    def _create_extension_handlers_handler(protocol):
        handler = get_exthandlers_handler(protocol)
        return handler

    def test_ext_handler(self, *args):
        # Test enable scenario.
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test goal state not changed
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")

        # Test goal state changed
        test_data.set_incarnation(2)
        test_data.set_extensions_config_sequence_number(1)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 1)

        # Test hotfix
        test_data.set_incarnation(3)
        test_data.set_extensions_config_version("1.1.1")
        test_data.set_extensions_config_sequence_number(2)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.1")
        self._assert_ext_status(protocol.report_vm_status, "success", 2)

        # Test upgrade
        test_data.set_incarnation(4)
        test_data.set_extensions_config_version("1.2.0")
        test_data.set_extensions_config_sequence_number(3)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.2.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 3)

        # Test disable
        test_data.set_incarnation(5)
        test_data.set_extensions_config_state(ExtensionRequestedState.Disabled)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "NotReady", 1, "1.2.0")

        # Test uninstall
        test_data.set_incarnation(6)
        test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_no_handler_status(protocol.report_vm_status)

        # Test uninstall again!
        test_data.set_incarnation(7)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_no_handler_status(protocol.report_vm_status)

    def test_it_should_only_download_extension_manifest_once_per_goal_state(self, *args):
        def _assert_handler_status_and_manifest_download_count(protocol, test_data, manifest_count):
            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
            self._assert_ext_status(protocol.report_vm_status, "success", 0)
            self.assertEqual(test_data.call_counts['manifest.xml'], manifest_count,
                             "We should have downloaded extension manifest {0} times".format(manifest_count))

        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        _assert_handler_status_and_manifest_download_count(protocol, test_data, 1)

        # Update Incarnation
        test_data.set_incarnation(2)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        _assert_handler_status_and_manifest_download_count(protocol, test_data, 2)

    def test_it_should_fail_handler_on_bad_extension_config_and_report_error(self, mock_get, mock_crypt_util, *args):
        invalid_config_dir = os.path.join(data_dir, "wire", "invalid_config")
        self.assertGreater(len(os.listdir(invalid_config_dir)), 0, "Not even a single bad config file found")

        for bad_config_file_path in os.listdir(invalid_config_dir):
            bad_conf = DATA_FILE.copy()
            bad_conf["ext_conf"] = os.path.join(invalid_config_dir, bad_config_file_path)
            test_data = wire_protocol_data.WireProtocolData(bad_conf)

            exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)
            with patch('azurelinuxagent.ga.exthandlers.add_event') as patch_add_event:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0")
                invalid_config_errors = [kw for _, kw in patch_add_event.call_args_list if kw['op'] == WALAEventOperation.InvalidExtensionConfig]
                self.assertEqual(1, len(invalid_config_errors), "Error not logged and reported to Kusto for {0}".format(bad_config_file_path))

    def test_it_should_process_valid_extensions_if_present(self, mock_get, mock_crypt_util, *args):
        bad_conf = DATA_FILE.copy()
        bad_conf["ext_conf"] = os.path.join("wire", "ext_conf_invalid_and_valid_handlers.xml")
        test_data = wire_protocol_data.WireProtocolData(bad_conf)

        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertTrue(protocol.report_vm_status.called)
        args, _ = protocol.report_vm_status.call_args
        vm_status = args[0]
        expected_handlers = ["OSTCExtensions.InvalidExampleHandlerLinux", "OSTCExtensions.ValidExampleHandlerLinux"]
        self.assertEqual(2, len(vm_status.vmAgent.extensionHandlers))
        for handler in vm_status.vmAgent.extensionHandlers:
            expected_status = "NotReady" if "InvalidExampleHandlerLinux" in handler.name else "Ready"
            expected_ext_count = 0 if "InvalidExampleHandlerLinux" in handler.name else 1

            self.assertEqual(expected_status, handler.status, "Invalid status")
            self.assertIn(handler.name, expected_handlers, "Handler not found")
            self.assertEqual("1.0.0", handler.version, "Incorrect handler version")
            self.assertEqual(expected_ext_count, len([ext for ext in vm_status.vmAgent.extensionHandlers if ext.name == handler.name and ext.extension_status is not None]), "Incorrect extensions enabled")
            expected_handlers.remove(handler.name)

        self.assertEqual(0, len(expected_handlers), "All handlers not reported status")

    def test_it_should_ignore_case_when_parsing_plugin_settings(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_CASE_MISMATCH_EXT)
        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        expected_ext_handlers = ["OSTCExtensions.ExampleHandlerLinux", "Microsoft.Powershell.ExampleExtension",
                                 "Microsoft.EnterpriseCloud.Monitoring.ExampleHandlerLinux",
                                 "Microsoft.CPlat.Core.ExampleExtensionLinux",
                                 "Microsoft.OSTCExtensions.Edp.ExampleExtensionLinuxInTest"]

        self.assertTrue(protocol.report_vm_status.called, "Handler status not reported")
        args, _ = protocol.report_vm_status.call_args
        vm_status = args[0]
        self.assertEqual(len(expected_ext_handlers), len(vm_status.vmAgent.extensionHandlers), "No of Extension handlers dont match")
        for handler_status in vm_status.vmAgent.extensionHandlers:
            self.assertEqual("Ready", handler_status.status, "Handler is not Ready")
            self.assertIn(handler_status.name, expected_ext_handlers, "Handler not reported")
            self.assertEqual("1.0.0", handler_status.version, "Handler version not matching")
            self.assertEqual(1, len(
                [status for status in vm_status.vmAgent.extensionHandlers if status.name == handler_status.name]),
                "No settings were found for this extension")
            expected_ext_handlers.remove(handler_status.name)
        self.assertEqual(0, len(expected_ext_handlers), "All handlers not reported")

    def test_ext_handler_no_settings(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_SETTINGS)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        test_ext = 
extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") with enable_invocations(test_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 0, "1.0.0") invocation_record.compare( (test_ext, ExtensionCommandNames.INSTALL), (test_ext, ExtensionCommandNames.ENABLE) ) # Uninstall the Plugin and make sure Disable called test_data.set_incarnation(2) test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() with enable_invocations(test_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertTrue(protocol.report_vm_status.called) args, _ = protocol.report_vm_status.call_args self.assertEqual(0, len(args[0].vmAgent.extensionHandlers)) invocation_record.compare( (test_ext, ExtensionCommandNames.DISABLE), (test_ext, ExtensionCommandNames.UNINSTALL) ) def test_ext_handler_no_public_settings(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_PUBLIC) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") def test_ext_handler_no_ext(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_EXT) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter # Assert no extension handler status exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_no_handler_status(protocol.report_vm_status) def test_ext_handler_sequencing(self, *args): # Test enable scenario. 
test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter dep_ext_level_2 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") dep_ext_level_1 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") with enable_invocations(dep_ext_level_2, dep_ext_level_1) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") self._assert_ext_status(protocol.report_vm_status, "success", 0, expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # check handler list and dependency levels self.assertTrue(exthandlers_handler.ext_handlers is not None) self.assertTrue(exthandlers_handler.ext_handlers is not None) self.assertEqual(len(exthandlers_handler.ext_handlers), 2) self.assertEqual(1, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_1.name).settings[0].dependencyLevel) self.assertEqual(2, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_2.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( (dep_ext_level_1, ExtensionCommandNames.INSTALL), (dep_ext_level_1, ExtensionCommandNames.ENABLE), (dep_ext_level_2, ExtensionCommandNames.INSTALL), (dep_ext_level_2, ExtensionCommandNames.ENABLE) ) # Test goal state not changed exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # Test goal state changed test_data.set_incarnation(2) test_data.set_extensions_config_sequence_number(1) # Swap the dependency ordering 
dep_ext_level_3 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") dep_ext_level_4 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"2\"", "dependencyLevel=\"3\"") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"1\"", "dependencyLevel=\"4\"") protocol.client.update_goal_state() with enable_invocations(dep_ext_level_3, dep_ext_level_4) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 1) self.assertEqual(len(exthandlers_handler.ext_handlers), 2) self.assertEqual(3, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_3.name).settings[0].dependencyLevel) self.assertEqual(4, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_4.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( (dep_ext_level_3, ExtensionCommandNames.ENABLE), (dep_ext_level_4, ExtensionCommandNames.ENABLE) ) # Test disable # In the case of disable, the last extension to be enabled should be # the first extension disabled. The first extension enabled should be # the last one disabled. 
test_data.set_incarnation(3) test_data.set_extensions_config_state(ExtensionRequestedState.Disabled) protocol.client.update_goal_state() with enable_invocations(dep_ext_level_3, dep_ext_level_4) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "NotReady", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") self.assertEqual(3, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_3.name).settings[0].dependencyLevel) self.assertEqual(4, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_4.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( (dep_ext_level_4, ExtensionCommandNames.DISABLE), (dep_ext_level_3, ExtensionCommandNames.DISABLE) ) # Test uninstall # In the case of uninstall, the last extension to be installed should be # the first extension uninstalled. The first extension installed # should be the last one uninstalled. 
test_data.set_incarnation(4) test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) # Swap the dependency ordering AGAIN dep_ext_level_5 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") dep_ext_level_6 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"3\"", "dependencyLevel=\"6\"") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"4\"", "dependencyLevel=\"5\"") protocol.client.update_goal_state() with enable_invocations(dep_ext_level_5, dep_ext_level_6) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_no_handler_status(protocol.report_vm_status) self.assertEqual(len(exthandlers_handler.ext_handlers), 2) self.assertEqual(5, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_5.name).settings[0].dependencyLevel) self.assertEqual(6, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_6.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( (dep_ext_level_6, ExtensionCommandNames.UNINSTALL), (dep_ext_level_5, ExtensionCommandNames.UNINSTALL) ) def test_it_should_process_sequencing_properly_even_if_no_settings_for_dependent_extension( self, mock_get, mock_crypt, *args): test_data_file = DATA_FILE.copy() test_data_file["ext_conf"] = "wire/ext_conf_dependencies_with_empty_settings.xml" test_data = wire_protocol_data.WireProtocolData(test_data_file) exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt, *args) ext_1 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") ext_2 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") with enable_invocations(ext_1, ext_2) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # Ensure no 
extension status was reported for OtherExampleHandlerLinux as no settings provided for it self._assert_handler_status(protocol.report_vm_status, "Ready", 0, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # Ensure correct status reported back for the other extension with settings self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.ExampleHandlerLinux") self._assert_ext_status(protocol.report_vm_status, "success", 0, expected_handler_name="OSTCExtensions.ExampleHandlerLinux") # Ensure the invocation order follows the dependency levels invocation_record.compare( (ext_2, ExtensionCommandNames.INSTALL), (ext_2, ExtensionCommandNames.ENABLE), (ext_1, ExtensionCommandNames.INSTALL), (ext_1, ExtensionCommandNames.ENABLE) ) def test_ext_handler_sequencing_should_fail_if_handler_failed(self, mock_get, mock_crypt, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING) exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt, *args) original_popen = subprocess.Popen def _assert_event_reported_only_on_incarnation_change(expected_count=1): handler_seq_reporting = [kwargs for _, kwargs in patch_add_event.call_args_list if kwargs[ 'op'] == WALAEventOperation.ExtensionProcessing and "Skipping processing of extensions since execution of dependent extension" in kwargs['message']] self.assertEqual(len(handler_seq_reporting), expected_count, "Error should be reported only on incarnation change") def mock_fail_extension_commands(args, **kwargs): if 'sample.py' in args: return original_popen("fail_this_command", **kwargs) return original_popen(args, **kwargs) with patch("subprocess.Popen", mock_fail_extension_commands): with patch('azurelinuxagent.ga.exthandlers.add_event') as patch_add_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, 
"NotReady", 0, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") _assert_event_reported_only_on_incarnation_change(expected_count=1) test_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # We should report error again on incarnation change _assert_event_reported_only_on_incarnation_change(expected_count=2) # Test it recovers on a new goal state if Handler succeeds test_data.set_incarnation(3) test_data.set_extensions_config_sequence_number(1) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") self._assert_ext_status(protocol.report_vm_status, "success", 1, expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # Update incarnation to confirm extension invocation order test_data.set_incarnation(4) protocol.client.update_goal_state() dep_ext_level_2 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") dep_ext_level_1 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") with enable_invocations(dep_ext_level_2, dep_ext_level_1) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # check handler list and dependency levels self.assertTrue(exthandlers_handler.ext_handlers is not None) self.assertTrue(exthandlers_handler.ext_handlers is not None) self.assertEqual(len(exthandlers_handler.ext_handlers), 2) self.assertEqual(1, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_1.name).settings[0].dependencyLevel) self.assertEqual(2, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_2.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( 
(dep_ext_level_1, ExtensionCommandNames.ENABLE), (dep_ext_level_2, ExtensionCommandNames.ENABLE) ) def test_ext_handler_sequencing_default_dependency_level(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(exthandlers_handler.ext_handlers[0].settings[0].dependencyLevel, 0) self.assertEqual(exthandlers_handler.ext_handlers[0].settings[0].dependencyLevel, 0) def test_ext_handler_sequencing_invalid_dependency_level(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING) test_data.set_incarnation(2) test_data.set_extensions_config_sequence_number(1) test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"1\"", "dependencyLevel=\"a6\"") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"2\"", "dependencyLevel=\"5b\"") exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(exthandlers_handler.ext_handlers[0].settings[0].dependencyLevel, 0) self.assertEqual(exthandlers_handler.ext_handlers[0].settings[0].dependencyLevel, 0) def test_ext_handler_rollingupgrade(self, *args): # Test enable scenario. 
test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_ROLLINGUPGRADE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test goal state changed test_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test minor version bump test_data.set_incarnation(3) test_data.set_extensions_config_version("1.1.0") protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test hotfix version bump test_data.set_incarnation(4) test_data.set_extensions_config_version("1.1.1") protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.1") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test disable test_data.set_incarnation(5) test_data.set_extensions_config_state(ExtensionRequestedState.Disabled) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "NotReady", 1, "1.1.1") # Test uninstall test_data.set_incarnation(6) test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() 
self._assert_no_handler_status(protocol.report_vm_status) # Test uninstall again! test_data.set_incarnation(7) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_no_handler_status(protocol.report_vm_status) # Test re-install test_data.set_incarnation(8) test_data.set_extensions_config_state(ExtensionRequestedState.Enabled) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.1") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test version bump post-re-install test_data.set_incarnation(9) test_data.set_extensions_config_version("1.2.0") protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.2.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test rollback test_data.set_incarnation(10) test_data.set_extensions_config_version("1.1.0") protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) def test_it_should_create_extension_events_dir_and_set_handler_environment_only_if_extension_telemetry_enabled(self, *args): for enable_extensions in [False, True]: tmp_lib_dir = tempfile.mkdtemp(prefix="ExtensionEnabled{0}".format(enable_extensions)) with patch("azurelinuxagent.common.conf.get_lib_dir", return_value=tmp_lib_dir): with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", enable_extensions): # Create new object for each run to force re-installation of extensions as we # only create handler_environment on installation test_data = 
wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) for ext_handler in exthandlers_handler.ext_handlers: ehi = ExtHandlerInstance(ext_handler, protocol) self.assertEqual(enable_extensions, os.path.exists(ehi.get_extension_events_dir()), "Events directory incorrectly set") handler_env_json = ehi.get_env_file() with open(handler_env_json, 'r') as env_json: env_data = json.load(env_json) self.assertEqual(enable_extensions, HandlerEnvironment.eventsFolder in env_data[0][ HandlerEnvironment.handlerEnvironment], "eventsFolder wrongfully set in HandlerEnvironment.json file") if enable_extensions: self.assertEqual(ehi.get_extension_events_dir(), env_data[0][HandlerEnvironment.handlerEnvironment][ HandlerEnvironment.eventsFolder], "Events directory dont match") # Clean the File System for the next test run if os.path.exists(tmp_lib_dir): shutil.rmtree(tmp_lib_dir, ignore_errors=True) def test_it_should_not_delete_extension_events_directory_on_extension_uninstall(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) ehi = ExtHandlerInstance(exthandlers_handler.ext_handlers[0], protocol) self.assertTrue(os.path.exists(ehi.get_extension_events_dir()), "Events directory should exist") 
# Uninstall extensions now test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) test_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertTrue(os.path.exists(ehi.get_extension_events_dir()), "Events directory should still exist") def test_it_should_uninstall_unregistered_extensions_properly(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") # Update version and set it to uninstall. That is how it would be propagated by CRP if a version 1.0.0 is # unregistered in PIR and a new version 1.0.1 is published. test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) test_data.set_extensions_config_version("1.0.1") # Since the installed version is not in PIR anymore, we need to also remove it from manifest file test_data.manifest = test_data.manifest.replace("1.0.0", "9.9.9") test_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() args, _ = protocol.report_vm_status.call_args vm_status = args[0] self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers), "The extension should not be reported as it is uninstalled") @patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered') @patch('azurelinuxagent.ga.exthandlers.add_event') def test_ext_handler_report_status_permanent(self, mock_add_event, mock_error_state, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter protocol.report_vm_status = Mock(side_effect=ProtocolError) 
mock_error_state.return_value = True exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() args, kw = mock_add_event.call_args self.assertEqual(False, kw['is_success']) self.assertTrue("Failed to report vm agent status" in kw['message']) self.assertEqual("ReportStatusExtended", kw['op']) @patch('azurelinuxagent.ga.exthandlers.add_event') def test_ext_handler_report_status_resource_gone(self, mock_add_event, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter protocol.report_vm_status = Mock(side_effect=ResourceGoneError) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() args, kw = mock_add_event.call_args self.assertEqual(False, kw['is_success']) self.assertTrue("ResourceGoneError" in kw['message']) self.assertEqual("ReportStatus", kw['op']) @patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered') @patch('azurelinuxagent.ga.exthandlers.add_event') def test_ext_handler_download_failure_permanent_ProtocolError(self, mock_add_event, mock_error_state, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter protocol.get_goal_state().fetch_extension_manifest = Mock(side_effect=ProtocolError) mock_error_state.return_value = True exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() event_occurrences = [kw for _, kw in mock_add_event.call_args_list if "[ExtensionError] Failed to get ext handler pkgs" in kw['message']] self.assertEqual(1, len(event_occurrences)) self.assertFalse(event_occurrences[0]['is_success']) self.assertTrue("Failed to get ext handler pkgs" in event_occurrences[0]['message']) self.assertTrue("ProtocolError" in event_occurrences[0]['message']) @patch('azurelinuxagent.ga.exthandlers.fileutil') def 
test_ext_handler_io_error(self, mock_fileutil, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter mock_fileutil.write_file.return_value = IOError("Mock IO Error") exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() def _assert_ext_status(self, vm_agent_status, expected_status, expected_seq_no, expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_msg=None): self.assertTrue(vm_agent_status.called) args, _ = vm_agent_status.call_args vm_status = args[0] ext_status = next(handler_status.extension_status for handler_status in vm_status.vmAgent.extensionHandlers if handler_status.name == expected_handler_name) self.assertEqual(expected_status, ext_status.status) self.assertEqual(expected_seq_no, ext_status.sequenceNumber) if expected_msg is not None: self.assertIn(expected_msg, ext_status.message) def test_it_should_initialise_and_use_command_execution_log_for_extensions(self, mock_get, mock_crypt_util, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") command_execution_log = os.path.join(conf.get_ext_log_dir(), "OSTCExtensions.ExampleHandlerLinux", "CommandExecution.log") self.assertTrue(os.path.exists(command_execution_log), "CommandExecution.log file not found") self.assertGreater(os.path.getsize(command_execution_log), 0, "The file should not be empty") def test_ext_handler_no_reporting_status(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter 
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")

        # Remove status file and re-run collecting extension status
        status_file = os.path.join(self.tmp_dir,
                                   "OSTCExtensions.ExampleHandlerLinux-1.0.0",
                                   "status", "0.status")
        self.assertTrue(os.path.isfile(status_file))
        os.remove(status_file)

        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.transitioning, 0,
                                expected_msg="This status is being reported by the Guest Agent since no status "
                                             "file was reported by extension OSTCExtensions.ExampleHandlerLinux")

    def test_wait_for_handler_completion_no_status(self, mock_http_get, mock_crypt_util, *args):
        """
        Testing the depends-on scenario when there is no status file reported by the extension.
        Expected to retry and eventually report failure for all dependent extensions.
        """
        exthandlers_handler, protocol = self._create_mock(
            wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING), mock_http_get,
            mock_crypt_util, *args)

        original_popen = subprocess.Popen

        def mock_popen(cmd, *args, **kwargs):
            # For the purpose of this test, delete the placeholder status file created by the agent
            if "sample.py" in cmd:
                status_path = os.path.join(kwargs['env'][ExtCommandEnvVariable.ExtensionPath], "status",
                                           "{0}.status".format(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber]))
                mock_popen.deleted_status_file = status_path
                if os.path.exists(status_path):
                    os.remove(status_path)
            return original_popen(["echo", "Yes"], *args, **kwargs)

        with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
            with patch('azurelinuxagent.ga.exthandlers._DEFAULT_EXT_TIMEOUT_MINUTES', 0.01):
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                # The Handler Status for the base extension should be Ready as it was executed successfully by the agent
                self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                            expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")
                # The extension status reported by the Handler should be transitioning since no status file was found
                self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.transitioning, 0,
                                        expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux",
                                        expected_msg="This status is being reported by the Guest Agent since no status "
                                                     "file was reported by extension OSTCExtensions.OtherExampleHandlerLinux")

                # The Handler Status for the dependent extension should be NotReady as it was not executed at all,
                # and since it was not executed, it should not report any extension status either
                self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0",
                                            expected_msg="Dependent Extension OSTCExtensions.OtherExampleHandlerLinux did not reach a terminal state within the allowed timeout. Last status was {0}".format(
                                                ExtensionStatusValue.warning))

    def test_it_should_not_create_placeholder_for_single_config_extensions(self, mock_http_get, mock_crypt_util, *args):
        original_popen = subprocess.Popen

        def mock_popen(cmd, *_, **kwargs):
            if 'env' in kwargs:
                if ExtensionCommandNames.ENABLE not in cmd:
                    # To force the test extension to not create a status file on Install, change the command
                    return original_popen(["echo", "not-enable"], *_, **kwargs)

                seq_no = kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber]
                ext_path = kwargs['env'][ExtCommandEnvVariable.ExtensionPath]
                status_file_name = "{0}.status".format(seq_no)
                status_file = os.path.join(ext_path, "status", status_file_name)
                self.assertFalse(os.path.exists(status_file),
                                 "Placeholder file should not be created for single config extensions")

            return original_popen(cmd, *_, **kwargs)

        aks_test_mock = DATA_FILE.copy()
        aks_test_mock["ext_conf"] = "wire/ext_conf_aks_extension.xml"

        exthandlers_handler, protocol = self._create_mock(wire_protocol_data.WireProtocolData(aks_test_mock),
                                                          mock_http_get, mock_crypt_util, *args)

        with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                        expected_handler_name="OSTCExtensions.ExampleHandlerLinux")
            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                        expected_handler_name="Microsoft.AKS.Compute.AKS.Linux.AKSNode")
            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                        expected_handler_name="Microsoft.AKS.Compute.AKS-Engine.Linux.Billing")
            # Extension without settings
            self._assert_handler_status(protocol.report_vm_status, "Ready", 0, "1.0.0",
                                        expected_handler_name="Microsoft.AKS.Compute.AKS.Linux.Billing")

            self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0,
                                    expected_handler_name="OSTCExtensions.ExampleHandlerLinux",
                                    expected_msg="Enabling non-AKS")
            self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0,
                                    expected_handler_name="Microsoft.AKS.Compute.AKS.Linux.AKSNode",
                                    expected_msg="Enabling AKSNode")
            self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0,
                                    expected_handler_name="Microsoft.AKS.Compute.AKS-Engine.Linux.Billing",
                                    expected_msg="Enabling AKSBilling")

    def test_it_should_include_part_of_status_in_ext_handler_message(self, mock_http_get, mock_crypt_util, *args):
        """
        Testing the scenario when the status file is invalid; the extension status reported by the Handler
        should contain a fragment of the status file for debugging.
        """
        exthandlers_handler, protocol = self._create_mock(
            wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE), mock_http_get,
            mock_crypt_util, *args)

        original_popen = subprocess.Popen

        def mock_popen(cmd, *args, **kwargs):
            # For the purpose of this test, replace the status file with a file that cannot be parsed
            if "sample.py" in cmd:
                status_path = os.path.join(kwargs['env'][ExtCommandEnvVariable.ExtensionPath], "status",
                                           "{0}.status".format(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber]))
                invalid_json_path = os.path.join(data_dir, "ext", "sample-status-invalid-json-format.json")

                if 'enable' in cmd:
                    invalid_json = fileutil.read_file(invalid_json_path)
                    fileutil.write_file(status_path, invalid_json)

            return original_popen(["echo", "Yes"], *args, **kwargs)

        with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # The Handler Status for the base extension should be Ready as it was executed successfully by the agent
            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                        expected_handler_name="OSTCExtensions.ExampleHandlerLinux")

            # The extension status reported by the Handler should contain a fragment of the status file for
            # debugging. The uniqueMachineId tag comes from the status file
            self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.error, 0,
                                    expected_handler_name="OSTCExtensions.ExampleHandlerLinux",
                                    expected_msg="\"uniqueMachineId\": \"e5e5602b-48a6-4c35-9f96-752043777af1\"")

    def test_wait_for_handler_completion_success_status(self, mock_http_get, mock_crypt_util, *args):
        """
        Testing the depends-on scenario for the successful case.
        Expected to report the status of both extensions properly.
        """
        exthandlers_handler, protocol = self._create_mock(
            wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING), mock_http_get,
            mock_crypt_util, *args)

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        # Both the Handler Status and the extension status of the base extension should be successful
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                    expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux",
                                    expected_msg='Plugin enabled')
        self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0,
                                expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")

        # The dependent extension should also be enabled successfully and report success
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_msg='Plugin enabled')
        self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0)

    def test_wait_for_handler_completion_error_status(self, mock_http_get, mock_crypt_util, *args):
        """
        Testing wait_for_handler_completion() when there is an error status. Expected to return False.
        """
        exthandlers_handler, protocol = self._create_mock(
            wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING), mock_http_get,
            mock_crypt_util, *args)

        original_popen = subprocess.Popen

        def mock_popen(cmd, *args, **kwargs):
            # For the purpose of this test, fail the command invoked for the test extension
            if "sample.py" in cmd:
                return original_popen(["/fail/this/command"], *args, **kwargs)
            return original_popen(cmd, *args, **kwargs)

        with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # The Handler Status for the base extension should be NotReady as it failed
            self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0",
                                        expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")

            # The Handler Status for the dependent extension should be NotReady as it was not executed at all,
            # and since it was not executed, it should not report any extension status either
            self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0",
                                        expected_msg='Skipping processing of extensions since execution of dependent extension OSTCExtensions.OtherExampleHandlerLinux failed')

    def test_get_ext_handling_status(self, *args):
        """
        Testing get_ext_handling_status() with various cases and verifying the results against the expected values
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=unused-variable,no-value-for-parameter

        handler_name = "Handler"
        exthandler = Extension(name=handler_name)
        extension = ExtensionSettings(name=handler_name)
        exthandler.settings.append(extension)

        # In the following list of test cases, the first element corresponds to seq_no,
        # the second element is the status file name, and the third element indicates whether the status file exists.
        # The fourth element is the expected return value of get_ext_handling_status().
        test_cases = [
            [-5, None, False, None],
            [-1, None, False, None],
            [0, None, False, None],
            [0, "filename", False, "warning"],
            [0, "filename", True, ExtensionStatus(status="success")],
            [5, "filename", False, "warning"],
            [5, "filename", True, ExtensionStatus(status="success")]
        ]

        orig_state = os.path.exists
        for case in test_cases:
            ext_handler_i = ExtHandlerInstance(exthandler, protocol)
            ext_handler_i.get_status_file_path = MagicMock(return_value=(case[0], case[1]))
            os.path.exists = MagicMock(return_value=case[2])
            if case[2]:
                # when the status file exists, it is expected to return the value from collect_ext_status()
                ext_handler_i.collect_ext_status = MagicMock(return_value=case[3])

            status = ext_handler_i.get_ext_handling_status(extension)
            if case[2]:
                self.assertEqual(status, case[3].status)
            else:
                self.assertEqual(status, case[3])
        os.path.exists = orig_state

    def test_is_ext_handling_complete(self, *args):
        """
        Testing is_ext_handling_complete() with various inputs and verifying the results against the expected
        output values.
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=unused-variable,no-value-for-parameter

        handler_name = "Handler"
        exthandler = Extension(name=handler_name)
        extension = ExtensionSettings(name=handler_name)
        exthandler.settings.append(extension)

        ext_handler_i = ExtHandlerInstance(exthandler, protocol)

        # Testing the no-status case
        ext_handler_i.get_ext_handling_status = MagicMock(return_value=None)
        completed, status = ext_handler_i.is_ext_handling_complete(extension)
        self.assertTrue(completed)
        self.assertEqual(status, None)

        # Here the key represents the possible input value to is_ext_handling_complete(),
        # and the value represents the expected output tuple from is_ext_handling_complete()
        expected_results = {
            "error": (True, "error"),
            "success": (True, "success"),
            "warning": (False, "warning"),
            "transitioning": (False, "transitioning")
        }

        for key in expected_results.keys():
            ext_handler_i.get_ext_handling_status = MagicMock(return_value=key)
            completed, status = ext_handler_i.is_ext_handling_complete(extension)
            self.assertEqual(completed, expected_results[key][0])
            self.assertEqual(status, expected_results[key][1])

    def test_ext_handler_version_decide_autoupgrade_internalversion(self, *args):
        for internal in [False, True]:
            for autoupgrade in [False, True]:
                if internal:
                    config_version = '1.3.0'
                    decision_version = '1.3.0'
                    if autoupgrade:
                        datafile = wire_protocol_data.DATA_FILE_EXT_AUTOUPGRADE_INTERNALVERSION
                    else:
                        datafile = wire_protocol_data.DATA_FILE_EXT_INTERNALVERSION
                else:
                    config_version = '1.0.0'
                    decision_version = '1.0.0'
                    if autoupgrade:
                        datafile = wire_protocol_data.DATA_FILE_EXT_AUTOUPGRADE
                    else:
                        datafile = wire_protocol_data.DATA_FILE

                _, protocol = self._create_mock(wire_protocol_data.WireProtocolData(datafile), *args)  # pylint: disable=no-value-for-parameter
                ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions
                self.assertEqual(1, len(ext_handlers))
                ext_handler = ext_handlers[0]
                self.assertEqual('OSTCExtensions.ExampleHandlerLinux', ext_handler.name)
                self.assertEqual(config_version, ext_handler.version, "config version.")
                ExtHandlerInstance(ext_handler, protocol).decide_version(None, None, None)
                self.assertEqual(decision_version, ext_handler.version, "decision version.")

    def test_ext_handler_version_decide_between_minor_versions(self, *args):
        """
        Using v2.x~v4.x for unit testing
        Available versions via manifest XML (I stands for internal):
        2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.3.0(I), 2.4.0(I), 3.0, 3.1, 4.0.0.0, 4.0.0.1, 4.1.0.0
        See tests/data/wire/manifest.xml for possible versions
        """
        # (installed_version, config_version, expected_version)
        cases = [
            (None, '2.0', '2.0.0'),
            (None, '2.0.0', '2.0.0'),
            ('1.0', '1.0.0', '1.0.0'),
            (None, '2.1.0', '2.1.0'),
            (None, '2.1.1', '2.1.1'),
            (None, '2.2.0', '2.2.0'),
            (None, '2.3.0', '2.3.0'),
            (None, '2.4.0', '2.4.0'),
            (None, '3.0', '3.0'),
            (None, '3.1', '3.1'),
            (None, '4.0', '4.0.0.1'),
            (None, '4.1', '4.1.0.0'),
        ]

        _, protocol = self._create_mock(wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE), *args)  # pylint: disable=no-value-for-parameter
        version_uri = 'http://mock-goal-state/Microsoft.OSTCExtensions_ExampleHandlerLinux_asiaeast_manifest.xml'

        for (installed_version, config_version, expected_version) in cases:
            ext_handler = Mock()
            ext_handler.properties = Mock()
            ext_handler.name = 'OSTCExtensions.ExampleHandlerLinux'
            ext_handler.manifest_uris = [version_uri]
            ext_handler.version = config_version

            ext_handler_instance = ExtHandlerInstance(ext_handler, protocol)
            ext_handler_instance.get_installed_version = Mock(return_value=installed_version)

            ext_handler_instance.decide_version(None, None, None)
            self.assertEqual(expected_version, ext_handler.version)

    @patch('azurelinuxagent.common.conf.get_extensions_enabled', return_value=False)
    def test_extensions_disabled(self, _, *args):
        # test status is reported for no extensions
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_EXT)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_no_handler_status(protocol.report_vm_status)

        # test status is reported, but extensions are not processed
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        report_vm_status = protocol.report_vm_status
        self.assertTrue(report_vm_status.called)

        args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
        vm_status = args[0]
        self.assertEqual(1, len(vm_status.vmAgent.extensionHandlers))
        exthandler = vm_status.vmAgent.extensionHandlers[0]
        self.assertEqual(ExtensionErrorCodes.PluginEnableProcessingFailed, exthandler.code)
        self.assertEqual('NotReady', exthandler.status)
        self.assertEqual("Extension 'OSTCExtensions.ExampleHandlerLinux' will not be processed since extension processing is disabled. To enable extension processing, set Extensions.Enabled=y in '/etc/waagent.conf'", exthandler.message)
        ext_status = exthandler.extension_status
        self.assertEqual(ExtensionErrorCodes.PluginEnableProcessingFailed, ext_status.code)
        self.assertEqual('error', ext_status.status)
        self.assertEqual("Extension 'OSTCExtensions.ExampleHandlerLinux' will not be processed since extension processing is disabled. To enable extension processing, set Extensions.Enabled=y in '/etc/waagent.conf'", ext_status.message)

    def test_extensions_deleted(self, *args):
        # Ensure initial enable is successful
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_DELETION)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Update incarnation, simulate new extension version and old one deleted
        test_data.set_incarnation(2)
        test_data.set_extensions_config_version("1.0.1")
        test_data.set_manifest_version('1.0.1')
        protocol.client.update_goal_state()

        # Ensure new extension can be enabled
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.1")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

    @patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.install', side_effect=ExtHandlerInstance.install, autospec=True)
    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_install_command')
    def test_install_failure(self, patch_get_install_command, patch_install, *args):
        """
        When extension install fails, the operation should not be retried.
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        # Ensure initial install is unsuccessful
        patch_get_install_command.return_value = "exit.sh 1"
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, patch_install.call_count)
        self.assertEqual(1, protocol.report_vm_status.call_count)
        self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.0")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_install_command')
    def test_install_failure_check_exception_handling(self, patch_get_install_command, *args):
        """
        When extension install fails, the operation should be reported to our telemetry service.
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        # Ensure install is unsuccessful
        patch_get_install_command.return_value = "exit.sh 1"
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, protocol.report_vm_status.call_count)
        self._assert_handler_status(protocol.report_vm_status, expected_status="NotReady", expected_ext_count=0, version="1.0.0")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command')
    def test_enable_failure_check_exception_handling(self, patch_get_enable_command, *args):
        """
        When extension enable fails, the operation should be reported.
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        # Ensure initial install is successful, but enable fails
        patch_get_enable_command.call_count = 0
        patch_get_enable_command.return_value = "exit.sh 1"
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, patch_get_enable_command.call_count)
        self.assertEqual(1, protocol.report_vm_status.call_count)
        self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.0")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_disable_failure_with_exception_handling(self, patch_get_disable_command, *args):
        """
        When extension disable fails, the operation should be reported.
        """
        # Ensure initial install and enable is successful, but disable fails
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        patch_get_disable_command.call_count = 0
        patch_get_disable_command.return_value = "exit 1"

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(0, patch_get_disable_command.call_count)
        self.assertEqual(1, protocol.report_vm_status.call_count)
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Next incarnation, disable extension
        test_data.set_incarnation(2)
        test_data.set_extensions_config_state(ExtensionRequestedState.Disabled)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, patch_get_disable_command.call_count)
        self.assertEqual(2, protocol.report_vm_status.call_count)
        self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.0")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_uninstall_command')
    def test_uninstall_failure(self, patch_get_uninstall_command, *args):
        """
        When extension uninstall fails, the operation should not be retried.
        """
        # Ensure initial install and enable is successful, but uninstall fails
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        patch_get_uninstall_command.call_count = 0
        patch_get_uninstall_command.return_value = "exit 1"

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(0, patch_get_uninstall_command.call_count)
        self.assertEqual(1, protocol.report_vm_status.call_count)
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Next incarnation, uninstall extension
        test_data.set_incarnation(2)
        test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, patch_get_uninstall_command.call_count)
        self.assertEqual(2, protocol.report_vm_status.call_count)
        self.assertEqual("Ready", protocol.report_vm_status.call_args[0][0].vmAgent.status)
        self._assert_no_handler_status(protocol.report_vm_status)

        # Ensure there are no further retries
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, patch_get_uninstall_command.call_count)
        self.assertEqual(3, protocol.report_vm_status.call_count)
        self.assertEqual("Ready", protocol.report_vm_status.call_args[0][0].vmAgent.status)
        self._assert_no_handler_status(protocol.report_vm_status)
    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_update_command')
    def test_extension_upgrade_failure_when_new_version_update_fails(self, patch_get_update_command, *args):
        """
        When the update command of the new extension fails, it should result in the new extension failed and the
        old extension disabled. On the next goal state, the entire upgrade scenario should be retried (once),
        meaning the download, initialize and update are called on the new extension.
        Note: we don't re-download the zip since it wasn't cleaned up in the previous goal state (we only clean up
        NotInstalled handlers), so we just re-use the existing zip of the new extension.
        """
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_update_command, *args)

        extension_name = exthandlers_handler.ext_handlers[0].name
        extension_calls = []
        original_popen = subprocess.Popen

        def mock_popen(*args, **kwargs):
            # Maintain an internal list of invoked commands of the test extension to assert on later
            if extension_name in args[0]:
                extension_calls.append(args[0])
            return original_popen(*args, **kwargs)

        with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            update_command_count = len([extension_call for extension_call in extension_calls if
                                        patch_get_update_command.return_value in extension_call])
            enable_command_count = len([extension_call for extension_call in extension_calls if
                                        "-enable" in extension_call])
            self.assertEqual(1, update_command_count)
            self.assertEqual(0, enable_command_count)

            # We report the failure of the new extension version
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1")

            # If the incarnation number changes (there's a new goal state), ensure we go through the entire upgrade
            # process again.
            test_data.set_incarnation(3)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            update_command_count = len([extension_call for extension_call in extension_calls if
                                        patch_get_update_command.return_value in extension_call])
            enable_command_count = len([extension_call for extension_call in extension_calls if
                                        "-enable" in extension_call])
            self.assertEqual(2, update_command_count)
            self.assertEqual(0, enable_command_count)

            # We report the failure of the new extension version
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_failure_when_prev_version_disable_fails(self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command, *args)  # pylint: disable=unused-variable

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') as patch_get_enable_command:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # When the previous version's disable fails, we expect the upgrade scenario to fail, so the enable
            # for the new version is not called and the new version handler's status is reported as not ready.
            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(0, patch_get_enable_command.call_count)
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1",
                                        expected_code=ExtensionErrorCodes.PluginUpdateProcessingFailed)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_failure_when_prev_version_disable_fails_and_recovers_on_next_incarnation(self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command, *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') as patch_get_enable_command:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # When the previous version's disable fails, we expect the upgrade scenario to fail, so the enable
            # for the new version is not called and the new version handler's status is reported as not ready.
            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(0, patch_get_enable_command.call_count)
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1")

            # Force a new goal state incarnation, only then will we attempt the upgrade again
            test_data.set_incarnation(3)
            protocol.client.update_goal_state()

            # Ensure disable won't fail by making launch_command a no-op
            with patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.launch_command') as patch_launch_command:  # pylint: disable=unused-variable
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self.assertEqual(2, patch_get_disable_command.call_count)
                self.assertEqual(1, patch_get_enable_command.call_count)
                self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_failure_when_prev_version_disable_fails_incorrect_zip(self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        # The download logic has retry logic that sleeps before each try - make sleep a no-op.
        with patch("time.sleep"):
            with patch("zipfile.ZipFile.extractall") as patch_zipfile_extractall:
                with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') as patch_get_enable_command:
                    patch_zipfile_extractall.side_effect = raise_ioerror
                    # The zipfile was corrupt and the upgrade sequence failed
                    exthandlers_handler.run()
                    exthandlers_handler.report_ext_handlers_status()

                    # We never called the disable of the old version due to the failure when unzipping the new version,
                    # nor the enable of the new version
                    self.assertEqual(0, patch_get_disable_command.call_count)
                    self.assertEqual(0, patch_get_enable_command.call_count)

                    # Ensure we are processing the same goal state only once
                    loop_run = 5
                    for x in range(loop_run):  # pylint: disable=unused-variable
                        exthandlers_handler.run()
                        exthandlers_handler.report_ext_handlers_status()

                    self.assertEqual(0, patch_get_disable_command.call_count)
                    self.assertEqual(0, patch_get_enable_command.call_count)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_old_handler_reports_failure_on_disable_fail_on_update(self, patch_get_disable_command, *args):
        old_version, new_version = "1.0.0", "1.0.1"
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch.object(ExtHandlerInstance, "report_event", autospec=True) as patch_report_event:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_disable_command.call_count)

            old_version_args, old_version_kwargs = patch_report_event.call_args
            new_version_args, new_version_kwargs = patch_report_event.call_args_list[0]

            self.assertEqual(new_version_args[0].ext_handler.version, new_version,
                             "The first call to report event should be from the new version of the ext-handler "
                             "to report download succeeded")
            self.assertEqual(new_version_kwargs['message'], "Download succeeded",
                             "The message should be Download succeeded")

            self.assertEqual(old_version_args[0].ext_handler.version, old_version,
                             "The last report event call should be from the old version ext-handler "
                             "to report the event from the previous version")
            self.assertFalse(old_version_kwargs['is_success'], "The last call to report event should be for a failure")
            self.assertTrue('Error' in old_version_kwargs['message'], "No error reported")

            # This is ensuring that the error status is being written to the new version
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0,
                                        version=new_version)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_update_command')
    def test_upgrade_failure_with_exception_handling(self, patch_get_update_command, *args):
        """
        Extension upgrade failure should not be retried
        """
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_update_command,  # pylint: disable=unused-variable
                                                                                          *args)

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(1, patch_get_update_command.call_count)
        self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1",
                                    expected_code=ExtensionErrorCodes.PluginUpdateProcessingFailed)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_should_pass_when_continue_on_update_failure_is_true_and_prev_version_disable_fails(
            self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(2, mock_continue_on_update_failure.call_count,
                             "This should be called twice, for both disable and uninstall")

            # Ensure the handler status and ext_status is successful
            self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")
            self._assert_ext_status(protocol.report_vm_status, "success", 0)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_uninstall_command')
    def test_extension_upgrade_should_pass_when_continue_on_update_failure_is_true_and_prev_version_uninstall_fails(
            self, patch_get_uninstall_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_uninstall_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_uninstall_command.call_count)
            self.assertEqual(2, mock_continue_on_update_failure.call_count,
                             "This should be called twice, for both disable and uninstall")

            # Ensure the handler status and ext_status is successful
            self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")
            self._assert_ext_status(protocol.report_vm_status, "success", 0)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_should_fail_when_continue_on_update_failure_is_false_and_prev_version_disable_fails(
            self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=False) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(1, mock_continue_on_update_failure.call_count,
                             "The first call would raise an exception")

            # Assert test scenario
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1",
                                        expected_code=ExtensionErrorCodes.PluginUpdateProcessingFailed)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_uninstall_command')
    def test_extension_upgrade_should_fail_when_continue_on_update_failure_is_false_and_prev_version_uninstall_fails(
            self, patch_get_uninstall_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_uninstall_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=False) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_uninstall_command.call_count)
            self.assertEqual(2, mock_continue_on_update_failure.call_count,
                             "The second call would raise an exception")

            # Assert test scenario
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1",
                                        expected_code=ExtensionErrorCodes.PluginUpdateProcessingFailed)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_should_fail_when_continue_on_update_failure_is_true_and_old_disable_and_new_enable_fails(
            self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True) \
                as mock_continue_on_update_failure:
            with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command', return_value="exit 1") \
                    as patch_get_enable:
                # These are just testing the mocks have been called and asserting the test conditions have been met
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self.assertEqual(1, patch_get_disable_command.call_count)
                self.assertEqual(2, mock_continue_on_update_failure.call_count)
                self.assertEqual(1, patch_get_enable.call_count)

                # Assert test scenario
                self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1",
                                            expected_code=ExtensionErrorCodes.PluginEnableProcessingFailed)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True)
    def test_uninstall_rc_env_var_should_report_not_run_for_non_update_calls_to_exthandler_run(
            self, patch_continue_on_update, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(Mock(), *args)

        with patch.object(CGroupConfigurator.get_instance(), "start_extension_command",
                          side_effect=[ExtensionError("Disable Failed"), "ok", ExtensionError("uninstall failed"),
                                       "ok", "ok", "New enable run ok"]) as patch_start_cmd:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            _, update_kwargs = patch_start_cmd.call_args_list[1]
            _, install_kwargs =
patch_start_cmd.call_args_list[3] _, enable_kwargs = patch_start_cmd.call_args_list[4] # Ensure that the env variables were present in the first run when failures were thrown for update self.assertEqual(2, patch_continue_on_update.call_count) self.assertTrue( '-update' in update_kwargs['command'] and ExtCommandEnvVariable.DisableReturnCode in update_kwargs['env'], "The update command call should have Disable Failed in env variable") self.assertTrue( '-install' in install_kwargs['command'] and ExtCommandEnvVariable.DisableReturnCode not in install_kwargs[ 'env'], "The Disable Failed env variable should be removed from install command") self.assertTrue( '-install' in install_kwargs['command'] and ExtCommandEnvVariable.UninstallReturnCode in install_kwargs[ 'env'], "The install command call should have Uninstall Failed in env variable") self.assertTrue( '-enable' in enable_kwargs['command'] and ExtCommandEnvVariable.UninstallReturnCode in enable_kwargs['env'], "The enable command call should have Uninstall Failed in env variable") # Initiating another run which shouldn't have any failed env variables in it if no failures # Updating Incarnation test_data.set_incarnation(3) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() _, new_enable_kwargs = patch_start_cmd.call_args # Ensure the new run didn't have Disable Return Code env variable self.assertNotIn(ExtCommandEnvVariable.DisableReturnCode, new_enable_kwargs['env']) # Ensure the new run had Uninstall Return Code env variable == NOT_RUN self.assertIn(ExtCommandEnvVariable.UninstallReturnCode, new_enable_kwargs['env']) self.assertTrue( new_enable_kwargs['env'][ExtCommandEnvVariable.UninstallReturnCode] == NOT_RUN) # Ensure the handler status and ext_status is successful self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1") self._assert_ext_status(protocol.report_vm_status, "success", 0) def 
test_ext_path_and_version_env_variables_set_for_ever_operation(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter with patch.object(CGroupConfigurator.get_instance(), "start_extension_command") as patch_start_cmd: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # Extension Path and Version should be set for all launch_command calls for args, kwargs in patch_start_cmd.call_args_list: self.assertIn(ExtCommandEnvVariable.ExtensionPath, kwargs['env']) self.assertIn('OSTCExtensions.ExampleHandlerLinux-1.0.0', kwargs['env'][ExtCommandEnvVariable.ExtensionPath]) self.assertIn(ExtCommandEnvVariable.ExtensionVersion, kwargs['env']) self.assertEqual("1.0.0", kwargs['env'][ExtCommandEnvVariable.ExtensionVersion]) self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0") @patch("azurelinuxagent.ga.cgroupconfigurator.handle_process_completion", side_effect="Process Successful") def test_ext_sequence_no_should_be_set_for_every_command_call(self, _, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter with patch("subprocess.Popen") as patch_popen: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() for _, kwargs in patch_popen.call_args_list: self.assertIn(ExtCommandEnvVariable.ExtensionSeqNumber, kwargs['env']) self.assertEqual(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber], "0") self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0") # Next incarnation and seq for extensions, update version test_data.goal_state = test_data.goal_state.replace("1<", "2<") test_data.ext_conf = test_data.ext_conf.replace('version="1.0.0"', 
'version="1.0.1"') test_data.ext_conf = test_data.ext_conf.replace('seqNo="0"', 'seqNo="1"') test_data.manifest = test_data.manifest.replace('1.0.0', '1.0.1') exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter with patch("subprocess.Popen") as patch_popen: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() for _, kwargs in patch_popen.call_args_list: self.assertIn(ExtCommandEnvVariable.ExtensionSeqNumber, kwargs['env']) self.assertEqual(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber], "1") self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1") def test_ext_sequence_no_should_be_set_from_within_extension(self, *args): test_file_name = "testfile.sh" handler_json = { "installCommand": test_file_name, "uninstallCommand": test_file_name, "updateCommand": test_file_name, "enableCommand": test_file_name, "disableCommand": test_file_name, "rebootAfterInstall": False, "reportHeartbeat": False, "continueOnUpdateFailure": False } manifest = HandlerManifest({'handlerManifest': handler_json}) # Script prints env variables passed to this process and prints all starting with ConfigSequenceNumber test_file = """ printenv | grep ConfigSequenceNumber """ base_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.0') if not os.path.exists(base_dir): os.mkdir(base_dir) self.create_script(os.path.join(base_dir, test_file_name), test_file) test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter expected_seq_no = 0 with patch.object(ExtHandlerInstance, "load_manifest", return_value=manifest): with patch.object(ExtHandlerInstance, 'report_event') as mock_report_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() for _, kwargs in 
mock_report_event.call_args_list: # The output is of the format - 'Command: testfile.sh -{Operation} \n[stdout]ConfigSequenceNumber=N\n[stderr]' if ("Command: " + test_file_name) not in kwargs['message']: continue self.assertIn("{0}={1}".format(ExtCommandEnvVariable.ExtensionSeqNumber, expected_seq_no), kwargs['message']) # Update goal state, extension version and seq no test_data.goal_state = test_data.goal_state.replace("1<", "2<") test_data.ext_conf = test_data.ext_conf.replace('version="1.0.0"', 'version="1.0.1"') test_data.ext_conf = test_data.ext_conf.replace('seqNo="0"', 'seqNo="1"') test_data.manifest = test_data.manifest.replace('1.0.0', '1.0.1') expected_seq_no = 1 base_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.1') if not os.path.exists(base_dir): os.mkdir(base_dir) self.create_script(os.path.join(base_dir, test_file_name), test_file) with patch.object(ExtHandlerInstance, 'report_event') as mock_report_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() for _, kwargs in mock_report_event.call_args_list: # The output is of the format - 'testfile.sh\n[stdout]ConfigSequenceNumber=N\n[stderr]' if test_file_name not in kwargs['message']: continue self.assertIn("{0}={1}".format(ExtCommandEnvVariable.ExtensionSeqNumber, expected_seq_no), kwargs['message']) def test_correct_exit_code_should_be_set_on_uninstall_cmd_failure(self, *args): test_file_name = "testfile.sh" test_error_file_name = "error.sh" handler_json = { "installCommand": test_file_name + " -install", "uninstallCommand": test_error_file_name, "updateCommand": test_file_name + " -update", "enableCommand": test_file_name + " -enable", "disableCommand": test_error_file_name, "rebootAfterInstall": False, "reportHeartbeat": False, "continueOnUpdateFailure": True } manifest = HandlerManifest({'handlerManifest': handler_json}) # Script prints env variables passed to this process and prints all starting with ConfigSequenceNumber test_file = 
""" printenv | grep AZURE_ """ exit_code = 151 test_error_content = """ exit %s """ % exit_code error_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.0') if not os.path.exists(error_dir): os.mkdir(error_dir) self.create_script(os.path.join(error_dir, test_error_file_name), test_error_content) test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(Mock(), *args) # pylint: disable=unused-variable base_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.1') if not os.path.exists(base_dir): os.mkdir(base_dir) self.create_script(os.path.join(base_dir, test_file_name), test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.load_manifest", return_value=manifest): with patch.object(ExtHandlerInstance, 'report_event') as mock_report_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() update_kwargs = next(kwargs for _, kwargs in mock_report_event.call_args_list if "Command: testfile.sh -update" in kwargs['message']) install_kwargs = next(kwargs for _, kwargs in mock_report_event.call_args_list if "Command: testfile.sh -install" in kwargs['message']) enable_kwargs = next(kwargs for _, kwargs in mock_report_event.call_args_list if "Command: testfile.sh -enable" in kwargs['message']) self.assertIn("%s=%s" % (ExtCommandEnvVariable.DisableReturnCode, exit_code), update_kwargs['message']) self.assertIn("%s=%s" % (ExtCommandEnvVariable.UninstallReturnCode, exit_code), install_kwargs['message']) self.assertIn("%s=%s" % (ExtCommandEnvVariable.UninstallReturnCode, exit_code), enable_kwargs['message']) def test_it_should_persist_goal_state_aggregate_status_until_new_incarnation(self, mock_get, mock_crypt_util, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args) exthandlers_handler.run() 
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")
        args, _ = protocol.report_vm_status.call_args
        gs_aggregate_status = args[0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status
        self.assertIsNotNone(gs_aggregate_status, "Goal State Aggregate status not reported")
        self.assertEqual(gs_aggregate_status.status, GoalStateStatus.Success, "Wrong status reported")
        self.assertEqual(gs_aggregate_status.in_svd_seq_no, "1", "Incorrect seq no")

        # Update incarnation and ensure the gs_aggregate_status is modified too
        test_data.set_incarnation(2)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")
        args, _ = protocol.report_vm_status.call_args
        new_gs_aggregate_status = args[0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status
        self.assertIsNotNone(new_gs_aggregate_status, "New Goal State Aggregate status not reported")
        self.assertNotEqual(gs_aggregate_status, new_gs_aggregate_status,
                            "The gs_aggregate_status should be different")
        self.assertEqual(new_gs_aggregate_status.status, GoalStateStatus.Success, "Wrong status reported")
        self.assertEqual(new_gs_aggregate_status.in_svd_seq_no, "2", "Incorrect seq no")

    def test_it_should_parse_required_features_properly(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_REQUIRED_FEATURES)
        _, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)

        required_features = protocol.get_goal_state().extensions_goal_state.required_features
        self.assertEqual(3, len(required_features), "Incorrect features parsed")
        for i, feature in enumerate(required_features):
            self.assertEqual(feature, "TestRequiredFeature{0}".format(i + 1), "Name mismatch")

    def test_it_should_fail_goal_state_if_required_features_not_supported(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_REQUIRED_FEATURES)
        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        args, _ = protocol.report_vm_status.call_args
        gs_aggregate_status = args[0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status
        self.assertEqual(0, len(args[0].vmAgent.extensionHandlers), "No extensions should be reported")
        self.assertIsNotNone(gs_aggregate_status, "GS Aggregate status should be reported")
        self.assertEqual(gs_aggregate_status.status, GoalStateStatus.Failed, "GS should be failed")
        self.assertEqual(gs_aggregate_status.code,
                         GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures,
                         "Incorrect error code set for GS failure")
        self.assertEqual(gs_aggregate_status.in_svd_seq_no, "1", "Sequence Number is wrong")


@patch("azurelinuxagent.common.protocol.wire.CryptUtil")
@patch("azurelinuxagent.common.utils.restutil.http_get")
class TestExtensionSequencing(AgentTestCase):

    def _create_mock(self, mock_http_get, MockCryptUtil):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)

        # Mock protocol to return test data
        mock_http_get.side_effect = test_data.mock_http_get
        MockCryptUtil.side_effect = test_data.mock_crypt_util

        protocol = WireProtocol(KNOWN_WIRESERVER_IP)
        protocol.detect()
        protocol.report_vm_status = MagicMock()

        handler = get_exthandlers_handler(protocol)
        return handler

    def _set_dependency_levels(self, dependency_levels, exthandlers_handler):
        """
        Creates extensions with the given dependencyLevel
        """
        handler_map = {}
        all_handlers = []
        for handler_name, level in dependency_levels:
            if handler_map.get(handler_name) is None:
                handler = Extension(name=handler_name)
                extension = ExtensionSettings(name=handler_name)
                handler.state = ExtensionRequestedState.Enabled
                handler.settings.append(extension)
                handler_map[handler_name] = handler
                all_handlers.append(handler)

            handler = handler_map[handler_name]
            for ext in handler.settings:
                ext.dependencyLevel = level

        exthandlers_handler.protocol.get_goal_state().extensions_goal_state._extensions *= 0
        exthandlers_handler.protocol.get_goal_state().extensions_goal_state.extensions.extend(all_handlers)

    def _validate_extension_sequence(self, expected_sequence, exthandlers_handler):
        installed_extensions = [a[0].ext_handler.name for a, _ in
                                exthandlers_handler.handle_ext_handler.call_args_list]
        self.assertListEqual(expected_sequence, installed_extensions,
                             "Expected and actual list of extensions are not equal")

    def _run_test(self, extensions_to_be_failed, expected_sequence, exthandlers_handler):
        """
        Mocks get_ext_handling_status() to mimic error status for a given extension.
        Calls ExtHandlersHandler.run()
        Verifies if the ExtHandlersHandler.handle_ext_handler() was called with appropriate extensions
        in the expected order.
        """

        def get_ext_handling_status(ext):
            status = "error" if ext.name in extensions_to_be_failed else "success"
            return status

        exthandlers_handler.handle_ext_handler = MagicMock()
        with patch.object(ExtHandlerInstance, "get_ext_handling_status", side_effect=get_ext_handling_status):
            with patch.object(ExtHandlerInstance, "get_handler_status", ExtHandlerStatus):
                with patch('azurelinuxagent.ga.exthandlers._DEFAULT_EXT_TIMEOUT_MINUTES', 0.01):
                    exthandlers_handler.run()

        self._validate_extension_sequence(expected_sequence, exthandlers_handler)

    def test_handle_ext_handlers(self, *args):
        """
        Tests extension sequencing among multiple extensions with dependencies.
        This test introduces failure in all possible levels and extensions.
        Verifies that the sequencing is in the expected order and that a failure in one extension
        skips the rest of the extensions in the sequence.
        """
        exthandlers_handler = self._create_mock(*args)  # pylint: disable=no-value-for-parameter

        self._set_dependency_levels([("A", 3), ("B", 2), ("C", 2), ("D", 1), ("E", 1), ("F", 1), ("G", 1)],
                                    exthandlers_handler)

        extensions_to_be_failed = []
        expected_sequence = ["D", "E", "F", "G", "B", "C", "A"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["D"]
        expected_sequence = ["D"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["E"]
        expected_sequence = ["D", "E"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["F"]
        expected_sequence = ["D", "E", "F"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["G"]
        expected_sequence = ["D", "E", "F", "G"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["B"]
        expected_sequence = ["D", "E", "F", "G", "B"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["C"]
        expected_sequence = ["D", "E", "F", "G", "B", "C"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["A"]
        expected_sequence = ["D", "E", "F", "G", "B", "C", "A"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

    def test_handle_ext_handlers_with_uninstallation(self, *args):
        """
        Tests extension sequencing among multiple extensions with dependencies when some extensions
        are to be uninstalled.
        Verifies that the sequencing is in the expected order and that the uninstallation takes place
        prior to all the installation/enable.
        """
        exthandlers_handler = self._create_mock(*args)  # pylint: disable=no-value-for-parameter

        # "A", "D" and "F" are marked as to be uninstalled
        self._set_dependency_levels([("A", 0), ("B", 2), ("C", 2), ("D", 0), ("E", 1), ("F", 0), ("G", 1)],
                                    exthandlers_handler)

        extensions_to_be_failed = []
        expected_sequence = ["A", "D", "F", "E", "G", "B", "C"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

    def test_handle_ext_handlers_fallback(self, *args):
        """
        This test makes sure that the extension sequencing is applied only when the user specifies
        dependency information in the extension.
        When there is no dependency specified, the agent is expected to assign dependencyLevel=0 to
        all extensions. Also, it is expected to install all the extensions no matter if there is any
        failure in any of the extensions.
        """
        exthandlers_handler = self._create_mock(*args)  # pylint: disable=no-value-for-parameter

        self._set_dependency_levels([("A", 1), ("B", 1), ("C", 1), ("D", 1), ("E", 1), ("F", 1), ("G", 1)],
                                    exthandlers_handler)

        # Expected sequence must contain all the extensions in the given order.
        # The following test cases verify against this same expected sequence no matter if any extension failed
        expected_sequence = ["A", "B", "C", "D", "E", "F", "G"]

        # Make sure that failure in any extension does not prevent other extensions to be installed
        extensions_to_be_failed = []
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["A"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["B"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["C"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["D"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["E"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["F"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)

        extensions_to_be_failed = ["G"]
        self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler)


class TestInVMArtifactsProfile(AgentTestCase):
    def test_it_should_parse_boolean_values(self):
        profile_json = '{ "onHold": true }'
        profile = InVMArtifactsProfile(profile_json)
        self.assertTrue(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json))

        profile_json = '{ "onHold": false }'
        profile = InVMArtifactsProfile(profile_json)
        self.assertFalse(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json))

    def test_it_should_parse_boolean_values_encoded_as_strings(self):
        profile_json = '{ "onHold": "true" }'
        profile = InVMArtifactsProfile(profile_json)
        self.assertTrue(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json))

        profile_json = '{ "onHold": "false" }'
        profile = InVMArtifactsProfile(profile_json)
        self.assertFalse(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json))

        profile_json = '{ "onHold": "TRUE" }'
        profile = InVMArtifactsProfile(profile_json)
        self.assertTrue(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json))


class TestExtensionUpdateOnFailure(AgentTestCase):

    def setUp(self):
        AgentTestCase.setUp(self)
        self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.0001))
        self.mock_sleep.start()

    def tearDown(self):
        self.mock_sleep.stop()
        AgentTestCase.tearDown(self)

    @staticmethod
    def _do_upgrade_scenario_and_get_order(first_ext, upgraded_ext):
        """
        Given the provided ExtensionEmulator objects, installs the first and then attempts to update
        to the second. StatusBlobs and command invocations for each actor can be checked with
        {emulator}.status_blobs and {emulator}.actions[{command_name}] respectively.

        Note that this method assumes the first extension's install command should succeed.
        Don't use this method if your test is attempting to emulate a fresh install (i.e. not an
        upgrade) with a failing install.
        """
        with mock_wire_protocol(DATA_FILE,
                                http_put_handler=generate_put_handler(first_ext, upgraded_ext)) as protocol:
            exthandlers_handler = get_exthandlers_handler(protocol)

            with enable_invocations(first_ext, upgraded_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                invocation_record.compare(
                    (first_ext, ExtensionCommandNames.INSTALL),
                    # Note that if installCommand is supposed to fail, this will erroneously raise.
                    (first_ext, ExtensionCommandNames.ENABLE)
                )

            protocol.mock_wire_data.set_extensions_config_version(upgraded_ext.version)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            with enable_invocations(first_ext, upgraded_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                return invocation_record

    def test_non_enabled_ext_should_not_be_disabled_at_ver_update(self):
        _, enable_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(enable_action=enable_action)
        second_ext = extension_emulator(version="1.1.0")

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

    def test_disable_failed_env_variable_should_be_set_for_update_cmd_when_continue_on_update_failure_is_true(self):
        exit_code, disable_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args
        self.assertEqual(kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], exit_code,
                         "DisableAction's return code should be in updateAction's env.")

    def test_uninstall_failed_env_variable_should_set_for_install_when_continue_on_update_failure_is_true(self):
        exit_code, uninstall_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args
        self.assertEqual(kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], exit_code,
                         "UninstallAction's return code should be in installAction's env.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_false_on_disable_failure(self):
        exit_code, disable_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=False)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE)
        )

        self.assertEqual(len(first_ext.status_blobs), 1,
                         "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1,
                         "The second extension should have a single submitted status.")
        self.assertTrue(exit_code in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "DisableAction's error code should be propagated to the status blob.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_false_on_uninstall_failure(self):
        exit_code, uninstall_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=False)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL)
        )

        self.assertEqual(len(first_ext.status_blobs), 1,
                         "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1,
                         "The second extension should have a single submitted status.")
        self.assertTrue(exit_code in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "UninstallAction's error code should be propagated to the status blob.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_true_on_disable_and_update_failure(self):
        exit_codes = {}
        exit_codes["disable"], disable_action = Actions.generate_unique_fail()
        exit_codes["update"], update_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", update_action=update_action,
                                        continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE)
        )

        self.assertEqual(len(first_ext.status_blobs), 1,
                         "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1,
                         "The second extension should have a single submitted status.")
        self.assertTrue(exit_codes["update"] in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "UpdateAction's error code should be propagated to the status blob.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_true_on_uninstall_and_install_failure(self):
        exit_codes = {}
        exit_codes["install"], install_action = Actions.generate_unique_fail()
        exit_codes["uninstall"], uninstall_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", install_action=install_action,
                                        continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL)
        )

        self.assertEqual(len(first_ext.status_blobs), 1,
                         "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1,
                         "The second extension should have a single submitted status.")
        self.assertTrue(exit_codes["install"] in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "InstallAction's error code should be propagated to the status blob.")

    def test_failed_env_variables_should_be_set_from_within_extension_commands(self):
        """
        This test verifies, from the perspective of the extension commands, whether the env variables
        are being set for those processes.
        """
        exit_codes = {}
        exit_codes["disable"], disable_action = Actions.generate_unique_fail()
        exit_codes["uninstall"], uninstall_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(disable_action=disable_action, uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args
        _, install_kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args

        second_extension_dir = os.path.join(
            conf.get_lib_dir(),
            "{0}-{1}".format(second_ext.name, second_ext.version)
        )

        # Ensure we're checking variables for update scenario
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], exit_codes["disable"],
                         "DisableAction's return code should be present in updateAction's env.")
        self.assertTrue(ExtCommandEnvVariable.UninstallReturnCode not in update_kwargs["env"],
                        "UninstallAction's return code should not be in updateAction's env.")
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.ExtensionPath], second_extension_dir,
                         "The second extension's directory should be present in updateAction's env.")
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.ExtensionVersion], "1.1.0",
                         "The second extension's version should be present in updateAction's env.")

        # Ensure we're checking variables for install scenario
        self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], exit_codes["uninstall"],
                         "UninstallAction's return code should be present in installAction's env.")
        self.assertTrue(ExtCommandEnvVariable.DisableReturnCode not in install_kwargs["env"],
                        "DisableAction's return code should not be in installAction's env.")
        self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.ExtensionPath], second_extension_dir,
                         "The second extension's directory should be present in installAction's env.")
        self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.ExtensionVersion], "1.1.0",
                         "The second extension's version should be present in installAction's env.")

    def test_correct_exit_code_should_set_on_disable_cmd_failure(self):
        exit_code, disable_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True,
                                        update_mode="UpdateWithoutInstall")

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], exit_code,
                         "DisableAction's return code should be present in UpdateAction's env.")

    def test_timeout_code_should_set_on_cmd_timeout(self):
        # Return None to every poll, forcing a timeout after 900 seconds
        # (actually very quick because sleep(*) is mocked)
        force_timeout = lambda *args, **kwargs: None

        first_ext = extension_emulator(disable_action=force_timeout, uninstall_action=force_timeout)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        with patch("os.killpg"):
            with patch("os.getpgid"):
                invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext,
                                                                                                    second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args
        _, install_kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args

        # Verify both commands are reported as timeouts.
self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], str(ExtensionErrorCodes.PluginHandlerScriptTimedout), "DisableAction's return code should be marked as a timeout in UpdateAction's env.") self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], str(ExtensionErrorCodes.PluginHandlerScriptTimedout), "UninstallAction's return code should be marked as a timeout in installAction's env.") def test_success_code_should_set_in_env_variables_on_cmd_success(self): first_ext = extension_emulator() second_ext = extension_emulator(version="1.1.0") invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext) invocation_record.compare( (first_ext, ExtensionCommandNames.DISABLE), (second_ext, ExtensionCommandNames.UPDATE), (first_ext, ExtensionCommandNames.UNINSTALL), (second_ext, ExtensionCommandNames.INSTALL), (second_ext, ExtensionCommandNames.ENABLE) ) _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args _, install_kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], "0", "DisableAction's return code in updateAction's env should be 0.") self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], "0", "UninstallAction's return code in installAction's env should be 0.") class TestCollectExtensionStatus(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.lib_dir = tempfile.mkdtemp() self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.001)) self.mock_sleep.start() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) def _setup_extension_for_validating_collect_ext_status(self, mock_lib_dir, status_file=None): handler_name = "TestHandler" handler_version = "1.0.0" mock_lib_dir.return_value = self.lib_dir fileutil.mkdir(os.path.join(self.lib_dir, handler_name + "-" + handler_version, "config")) 
        fileutil.mkdir(os.path.join(self.lib_dir, handler_name + "-" + handler_version, "status"))
        shutil.copy(tempfile.mkstemp(prefix="test-file")[1],
                    os.path.join(self.lib_dir, handler_name + "-" + handler_version, "config", "0.settings"))
        if status_file is not None:
            shutil.copy(os.path.join(data_dir, "ext", status_file),
                        os.path.join(self.lib_dir, handler_name + "-" + handler_version, "status", "0.status"))

        with mock_wire_protocol(DATA_FILE) as protocol:
            exthandler = Extension(name=handler_name)
            exthandler.version = handler_version
            extension = ExtensionSettings(name=handler_name, sequenceNumber=0)
            exthandler.settings.append(extension)

            return ExtHandlerInstance(exthandler, protocol), extension

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status(self, mock_lib_dir):
        """
        This test validates that collect_ext_status correctly picks up the status file (sample-status.json)
        and then parses it correctly.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, "Enable")
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertEqual(ext_status.message, "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In "
                                            "lobortis elementum sapien, non commodo odio semper ac.")
        self.assertEqual(ext_status.status, ExtensionStatusValue.success)
        self.assertEqual(len(ext_status.substatusList), 1)
        sub_status = ext_status.substatusList[0]
        self.assertEqual(sub_status.code, "0")
        self.assertEqual(sub_status.message, None)
        self.assertEqual(sub_status.status, ExtensionStatusValue.success)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_for_invalid_json(self, mock_lib_dir):
        """
        This test validates that collect_ext_status correctly picks up the status file
        (sample-status-invalid-json-format.json) and, since the JSON cannot be parsed correctly, that the
        extension status message includes 2000 bytes of the status file and the line number at which parsing
        failed. The uniqueMachineId tag comes from the status file.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-invalid-json-format.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSettingsStatusInvalid)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, None)
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertRegex(ext_status.message, r".*The status reported by the extension TestHandler-1.0.0\(Sequence number 0\), "
                                             r"was in an incorrect format and the agent could not parse it correctly."
                                             r" Failed due to.*")
        self.assertIn("\"uniqueMachineId\": \"e5e5602b-48a6-4c35-9f96-752043777af1\"", ext_status.message)
        self.assertEqual(ext_status.status, ExtensionStatusValue.error)
        self.assertEqual(len(ext_status.substatusList), 0)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_it_should_collect_ext_status_even_when_config_dir_deleted(self, mock_lib_dir):
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status.json")

        shutil.rmtree(ext_handler_i.get_conf_dir(), ignore_errors=True)
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, "Enable")
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertEqual(ext_status.message, "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In "
                                            "lobortis elementum sapien, non commodo odio semper ac.")
        self.assertEqual(ext_status.status, ExtensionStatusValue.success)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_very_large_status_message(self, mock_lib_dir):
        """
        Testing collect_ext_status() with a very large status file (>128K) to see if it correctly parses the
        status without generating a really large message.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-very-large.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, "Enable")
        self.assertEqual(ext_status.sequenceNumber, 0)
        # [TRUNCATED] comes from azurelinuxagent.ga.exthandlers._TRUNCATED_SUFFIX
        self.assertRegex(ext_status.message, r"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non "
                                             r"lacinia urna, sit .*\[TRUNCATED\]")
        self.maxDiff = None
        self.assertEqual(ext_status.status, ExtensionStatusValue.success)
        self.assertEqual(len(ext_status.substatusList), 1)  # NUM OF SUBSTATUS PARSED
        for sub_status in ext_status.substatusList:
            self.assertRegex(sub_status.name, r'\[\{"status"\: \{"status": "success", "code": "1", "snapshotInfo": '
                                              r'\[\{"snapshotUri":.*')
            self.assertEqual(0, sub_status.code)
            self.assertRegex(sub_status.message, "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum "
                                                 "non lacinia urna, sit amet venenatis orci.*")
            self.assertEqual(sub_status.status, ExtensionStatusValue.success)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_very_large_status_file_with_multiple_substatus_nodes(self, mock_lib_dir):
        """
        Testing collect_ext_status() with a very large status file (>128K) to see if it correctly parses the
        status without generating a really large message. This checks if the multiple substatus messages are
        correctly parsed and truncated.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(
            mock_lib_dir, "sample-status-very-large-multiple-substatuses.json")  # ~470K bytes.
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, "Enable")
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertRegex(ext_status.message, "Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
                                             "Vestibulum non lacinia urna, sit .*")
        self.assertEqual(ext_status.status, ExtensionStatusValue.success)
        self.assertEqual(len(ext_status.substatusList), 12)  # The original file has 41 substatus nodes.
        for sub_status in ext_status.substatusList:
            self.assertRegex(sub_status.name, r'\[\{"status"\: \{"status": "success", "code": "1", "snapshotInfo": '
                                              r'\[\{"snapshotUri":.*')
            self.assertEqual(0, sub_status.code)
            self.assertRegex(sub_status.message, "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum "
                                                 "non lacinia urna, sit amet venenatis orci.*")
            self.assertEqual(ExtensionStatusValue.success, sub_status.status)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_read_file_read_exceptions(self, mock_lib_dir):
        """
        Testing collect_ext_status to validate the readfile exceptions.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status.json")

        original_read_file = read_file

        def mock_read_file(file_, *args, **kwargs):
            expected_status_file_path = os.path.join(self.lib_dir,
                                                     ext_handler_i.ext_handler.name + "-" + ext_handler_i.ext_handler.version,
                                                     "status", "0.status")
            if file_ == expected_status_file_path:
                raise IOError("No such file or directory: {0}".format(expected_status_file_path))
            else:
                return original_read_file(file_, *args, **kwargs)

        with patch('azurelinuxagent.common.utils.fileutil.read_file', mock_read_file):
            ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginUnknownFailure)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, None)
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertRegex(ext_status.message, r".*We couldn't read any status for {0}-{1} extension, for the "
                                             r"sequence number {2}. It failed due to".format("TestHandler", "1.0.0", 0))
        self.assertEqual(ext_status.status, ExtensionStatusValue.error)
        self.assertEqual(len(ext_status.substatusList), 0)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_json_exceptions(self, mock_lib_dir):
        """
        Testing collect_ext_status() with a malformed json status file.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-invalid-format-emptykey-line7.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSettingsStatusInvalid)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, None)
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertRegex(ext_status.message, r".*The status reported by the extension {0}-{1}\(Sequence number {2}\), "
                                             "was in an incorrect format and the agent could not parse it correctly."
                                             " Failed due to.*".format("TestHandler", "1.0.0", 0))
        self.assertEqual(ext_status.status, ExtensionStatusValue.error)
        self.assertEqual(len(ext_status.substatusList), 0)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_parse_ext_status_exceptions(self, mock_lib_dir):
        """
        Testing collect_ext_status() with a malformed json status file.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-invalid-status-no-status-status-key.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSettingsStatusInvalid)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, None)
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertRegex(ext_status.message, "Could not get a valid status from the extension {0}-{1}. "
                                             "Encountered the following error".format("TestHandler", "1.0.0"))
        self.assertEqual(ext_status.status, ExtensionStatusValue.error)
        self.assertEqual(len(ext_status.substatusList), 0)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_it_should_report_transitioning_if_status_file_not_found(self, mock_lib_dir):
        """
        Testing collect_ext_status() with a missing status file.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir)
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSuccess)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, None)
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertIn("This status is being reported by the Guest Agent since no status file was reported by extension {0}".format("TestHandler"),
                      ext_status.message)
        self.assertEqual(ext_status.status, ExtensionStatusValue.transitioning)
        self.assertEqual(len(ext_status.substatusList), 0)


class TestAdditionalLocationsExtensions(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        self.test_data = DATA_FILE_EXT_ADDITIONAL_LOCATIONS.copy()

    def tearDown(self):
        AgentTestCase.tearDown(self)

    @patch('time.sleep')
    def test_additional_locations_node_is_consumed(self, _):
        location_uri_pattern = r'https?://mock-goal-state/(?P<location_type>{0})/(?P<manifest_num>\d)/manifest.xml'\
            .format(r'(location)|(failoverlocation)|(additionalLocation)')
        location_uri_regex = re.compile(location_uri_pattern)

        manifests_used = [
            ('location', '1'),
            ('failoverlocation', '2'),
            ('additionalLocation', '3'),
            ('additionalLocation', '4')
        ]

        def manifest_location_handler(url, **kwargs):
            url_match = location_uri_regex.match(url)
            if not url_match:
                if "extensionArtifact" in url:
                    wrapped_url = kwargs.get("headers", {}).get("x-ms-artifact-location")
                    if wrapped_url and location_uri_regex.match(wrapped_url):
                        return Exception("Ignoring host plugin requests for testing purposes.")
                return None

            location_type, manifest_num = url_match.group("location_type", "manifest_num")
            try:
                manifests_used.remove((location_type, manifest_num))
            except ValueError:
                raise AssertionError("URI '{0}' used multiple times".format(url))

            if manifests_used:
                # Still locations to try in the list; throw a fake
                # error to make sure all of the locations get called.
                return Exception("Failing manifest fetch from uri '{0}' for testing purposes.".format(url))

            return None

        with mock_wire_protocol(self.test_data, http_get_handler=manifest_location_handler) as protocol:
            exthandlers_handler = get_exthandlers_handler(protocol)
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

    def test_fetch_manifest_timeout_is_respected(self):
        location_uri_pattern = r'https?://mock-goal-state/(?P<location_type>{0})/(?P<manifest_num>\d)/manifest.xml'\
            .format(r'(location)|(failoverlocation)|(additionalLocation)')
        location_uri_regex = re.compile(location_uri_pattern)

        def manifest_location_handler(url, **kwargs):
            url_match = location_uri_regex.match(url)
            if not url_match:
                if "extensionArtifact" in url:
                    wrapped_url = kwargs.get("headers", {}).get("x-ms-artifact-location")
                    if wrapped_url and location_uri_regex.match(wrapped_url):
                        return Exception("Ignoring host plugin requests for testing purposes.")
                return None

            if manifest_location_handler.num_times_called == 0:
                time.sleep(.3)
                manifest_location_handler.num_times_called += 1
                return Exception("Failing manifest fetch from uri '{0}' for testing purposes.".format(url))

            return None
        manifest_location_handler.num_times_called = 0

        with mock_wire_protocol(self.test_data, http_get_handler=manifest_location_handler) as protocol:
            ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions

            download_timeout = wire._DOWNLOAD_TIMEOUT
            wire._DOWNLOAD_TIMEOUT = datetime.timedelta(minutes=0)
            try:
                with self.assertRaises(ExtensionDownloadError):
                    protocol.client.fetch_manifest("extension", ext_handlers[0].manifest_uris, use_verify_header=False)
            finally:
                wire._DOWNLOAD_TIMEOUT = download_timeout


# New test cases should be added here. This class uses mock_wire_protocol
class TestExtension(TestExtensionBase, HttpRequestPredicates):
    @classmethod
    def setUpClass(cls):
        AgentTestCase.setUpClass()
        cls.mock_sleep = patch('time.sleep', side_effect=lambda _: mock_sleep(0.001))
        cls.mock_sleep.start()

    @classmethod
    def tearDownClass(cls):
        cls.mock_sleep.stop()

    def setUp(self):
        AgentTestCase.setUp(self)

    def tearDown(self):
        AgentTestCase.tearDown(self)

    @patch('time.gmtime', MagicMock(return_value=time.gmtime(0)))
    @patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("0.0.0.0"))
    def test_ext_handler_reporting_status_file(self, _):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            def mock_http_put(url, *args, **_):
                if HttpRequestPredicates.is_host_plugin_status_request(url):
                    # Skip reading the HostGA request data as it is encoded
                    return MockHttpResponse(status=500)
                protocol.aggregate_status = json.loads(args[0])
                return MockHttpResponse(status=201)

            protocol.aggregate_status = None
            protocol.set_http_handlers(http_put_handler=mock_http_put)
            exthandlers_handler = get_exthandlers_handler(protocol)

            # Create the supported features list that is sent to CRP
            supported_features = []
            for _, feature in get_agent_supported_features_list_for_crp().items():
                supported_features.append(
                    {
                        "Key": feature.name,
                        "Value": feature.version
                    }
                )

            expected_status = {
                "version": "1.1",
                "timestampUTC": "1970-01-01T00:00:00Z",
                "aggregateStatus": {
                    "guestAgentStatus": {
                        "version": AGENT_VERSION,
                        "status": "Ready",
                        "formattedMessage": {
                            "lang": "en-US",
                            "message": "Guest Agent is running"
                        }
                    },
                    "handlerAggregateStatus": [
                        {
                            "handlerVersion": "1.0.0",
                            "handlerName": "OSTCExtensions.ExampleHandlerLinux",
                            "status": "Ready",
                            "code": 0,
                            "useExactVersion": True,
                            "formattedMessage": {
                                "lang": "en-US",
                                "message": "Plugin enabled"
                            },
                            "runtimeSettingsStatus": {
                                "settingsStatus": {
                                    "status": {
                                        "name": "OSTCExtensions.ExampleHandlerLinux",
                                        "configurationAppliedTime": None,
                                        "operation": None,
                                        "status": "success",
                                        "code": 0,
                                        "formattedMessage": {
                                            "lang": "en-US",
                                            "message": None
                                        }
                                    },
                                    "version": 1.0,
                                    "timestampUTC": "1970-01-01T00:00:00Z"
                                },
                                "sequenceNumber": 0
                            }
                        }
                    ],
                    "vmArtifactsAggregateStatus": {
                        "goalStateAggregateStatus": {
                            "formattedMessage": {
                                "lang": "en-US",
                                "message": "GoalState executed successfully"
                            },
                            "timestampUTC": "1970-01-01T00:00:00Z",
                            "inSvdSeqNo": "1",
                            "status": "Success",
                            "code": 0
                        }
                    }
                },
                "guestOSInfo": None,
                "supportedFeatures": supported_features
            }

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            actual_status = json.loads(protocol.get_status_blob_data())

            # Don't compare the guestOSInfo
            self.assertIsNotNone(actual_status.get("guestOSInfo"), "The status file is missing the guestOSInfo property")
            actual_status["guestOSInfo"] = None

            self.assertEqual(expected_status, actual_status)

    def test_it_should_process_extensions_only_if_allowed(self):
        def assert_extensions_called(exthandlers_handler, expected_call_count=0):
            extension_name = 'OSTCExtensions.ExampleHandlerLinux'
            extension_calls = []
            original_popen = subprocess.Popen

            def mock_popen(*args, **kwargs):
                if extension_name in args[0]:
                    extension_calls.append(args[0])
                return original_popen(*args, **kwargs)

            with patch('subprocess.Popen', side_effect=mock_popen):
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                self.assertEqual(expected_call_count, len(extension_calls), "Call counts don't match")

        with patch('time.sleep', side_effect=lambda _: mock_sleep(0.001)):
            def http_get_handler(url, *_, **kwargs):
                if self.is_in_vm_artifacts_profile_request(url) or self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs):
                    return mock_in_vm_artifacts_profile_response
                return None

            mock_in_vm_artifacts_profile_response = MockHttpResponse(200, body='{ "onHold": false }'.encode('utf-8'))

            with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE, http_get_handler=http_get_handler) as protocol:
                protocol.report_vm_status = MagicMock()
                exthandlers_handler = get_exthandlers_handler(protocol)

                # Extension called once for Install and once for Enable
                assert_extensions_called(exthandlers_handler, expected_call_count=2)
                self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")

                # Update GoalState
                protocol.mock_wire_data.set_incarnation(2)
                protocol.client.update_goal_state()

                with patch.object(conf, 'get_extensions_enabled', return_value=False):
                    assert_extensions_called(exthandlers_handler, expected_call_count=0)

                # Disabled over-provisioning in configuration
                # In this case we should process GoalState as incarnation changed
                with patch.object(conf, 'get_extensions_enabled', return_value=True):
                    with patch.object(conf, 'get_enable_overprovisioning', return_value=False):
                        # 1 expected call count for Enable command
                        assert_extensions_called(exthandlers_handler, expected_call_count=1)

                # Enabled on_hold property in artifact_blob
                mock_in_vm_artifacts_profile_response = MockHttpResponse(200, body='{ "onHold": true }'.encode('utf-8'))
                protocol.client.reset_goal_state()
                with patch.object(conf, 'get_extensions_enabled', return_value=True):
                    with patch.object(conf, "get_enable_overprovisioning", return_value=True):
                        assert_extensions_called(exthandlers_handler, expected_call_count=0)

                # Disabled on_hold property in artifact_blob
                mock_in_vm_artifacts_profile_response = MockHttpResponse(200, body='{ "onHold": false }'.encode('utf-8'))
                protocol.client.reset_goal_state()
                with patch.object(conf, 'get_extensions_enabled', return_value=True):
                    with patch.object(conf, "get_enable_overprovisioning", return_value=True):
                        # 1 expected call count for Enable command
                        assert_extensions_called(exthandlers_handler, expected_call_count=1)

    def test_it_should_process_extensions_appropriately_on_artifact_hold(self):
        with patch('time.sleep', side_effect=lambda _: mock_sleep(0.001)):
            with patch("azurelinuxagent.common.conf.get_enable_overprovisioning", return_value=True):
                with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol:
                    protocol.report_vm_status = MagicMock()
                    exthandlers_handler = get_exthandlers_handler(protocol)
                    #
                    # The test data sets onHold to True; extensions should not be processed
                    #
                    exthandlers_handler.run()
exthandlers_handler.report_ext_handlers_status() vm_agent_status = protocol.report_vm_status.call_args[0][0].vmAgent self.assertEqual(vm_agent_status.status, "Ready", "Agent should report ready") self.assertEqual(0, len(vm_agent_status.extensionHandlers), "No extensions should be reported as on_hold is True") self.assertIsNone(vm_agent_status.vm_artifacts_aggregate_status.goal_state_aggregate_status, "No GS Aggregate status should be reported") # # Now force onHold to False; extensions should be processed # def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url) or self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): return MockHttpResponse(200, body='{ "onHold": false }'.encode('utf-8')) return None protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.client.reset_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self.assertEqual("1", protocol.report_vm_status.call_args[0][0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status.in_svd_seq_no, "SVD sequence number mismatch") def test_it_should_redact_access_tokens_in_extension_status(self): original = r'''ONE https://foo.blob.core.windows.net/bar?sv=2000&ss=bfqt&srt=sco&sp=rw&se=2025&st=2022&spr=https&sig=SI%3D TWO:HTTPS://bar.blob.core.com/foo/bar/foo.txt?sv=2018&sr=b&sig=Yx%3D&st=2023%3A52Z&se=9999%3A59%3A59Z&sp=r TWO https://bar.com/foo?uid=2018&sr=b THREE''' expected = r'''ONE https://foo.blob.core.windows.net/bar? TWO:HTTPS://bar.blob.core.com/foo/bar/foo.txt? 
TWO https://bar.com/foo?uid=2018&sr=b THREE''' with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: exthandlers_handler = get_exthandlers_handler(protocol) original_popen = subprocess.Popen def mock_popen(cmd, *args, **kwargs): if cmd.endswith("sample.py -enable"): cmd = "echo '{0}'; >&2 echo '{0}'; exit 1".format(original) return original_popen(cmd, *args, **kwargs) with patch.object(subprocess, 'Popen', side_effect=mock_popen): exthandlers_handler.run() status = exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, len(status.vmAgent.extensionHandlers), 'Expected exactly 1 extension status') message = status.vmAgent.extensionHandlers[0].message self.assertIn('[stdout]\n{0}'.format(expected), message, "The extension's stdout was not redacted correctly") self.assertIn('[stderr]\n{0}'.format(expected), message, "The extension's stderr was not redacted correctly") class TestExtensionHandlerManifest(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.ext_handler = Extension(name='foo') self.ext_handler.version = "1.2.3" self.ext_handler_instance = ExtHandlerInstance(ext_handler=self.ext_handler, protocol=WireProtocol("1.2.3.4")) self.test_file = os.path.join(self.tmp_dir, "HandlerManifest.json") def test_handler_manifest_parsed_correctly(self): shutil.copyfile(os.path.join(data_dir, "ext", "handler_manifest", "valid_manifest.json"), self.test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_manifest_file", return_value=self.test_file): manifest = self.ext_handler_instance.load_manifest() self.assertEqual(manifest.get_install_command(), "install_cmd") self.assertEqual(manifest.get_enable_command(), "enable_cmd") self.assertEqual(manifest.get_uninstall_command(), "uninstall_cmd") self.assertEqual(manifest.get_update_command(), "update_cmd") self.assertEqual(manifest.get_disable_command(), "disable_cmd") self.assertTrue(manifest.is_continue_on_update_failure()) self.assertTrue(manifest.is_report_heartbeat()) 
self.assertTrue(manifest.supports_multiple_extensions()) def test_handler_manifest_defaults(self): # Set only the required fields shutil.copyfile(os.path.join(data_dir, "ext", "handler_manifest", "manifest_no_optional_fields.json"), self.test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_manifest_file", return_value=self.test_file): manifest = self.ext_handler_instance.load_manifest() self.assertFalse(manifest.is_continue_on_update_failure()) self.assertFalse(manifest.is_report_heartbeat()) self.assertFalse(manifest.supports_multiple_extensions()) def test_handler_manifest_boolean_fields(self): # Set the boolean fields to strings shutil.copyfile(os.path.join(data_dir, "ext", "handler_manifest", "manifest_boolean_fields_strings.json"), self.test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_manifest_file", return_value=self.test_file): manifest = self.ext_handler_instance.load_manifest() self.assertTrue(manifest.is_continue_on_update_failure()) self.assertTrue(manifest.is_report_heartbeat()) self.assertTrue(manifest.supports_multiple_extensions()) # set the boolean fields to invalid values shutil.copyfile(os.path.join(data_dir, "ext", "handler_manifest", "manifest_boolean_fields_invalid.json"), self.test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_manifest_file", return_value=self.test_file): manifest = self.ext_handler_instance.load_manifest() self.assertFalse(manifest.is_continue_on_update_failure()) self.assertFalse(manifest.is_report_heartbeat()) self.assertFalse(manifest.supports_multiple_extensions()) # set the boolean fields to 'false' string shutil.copyfile(os.path.join(data_dir, "ext", "handler_manifest", "manifest_boolean_fields_false.json"), self.test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_manifest_file", return_value=self.test_file): manifest = self.ext_handler_instance.load_manifest() self.assertFalse(manifest.is_continue_on_update_failure()) 
self.assertFalse(manifest.is_report_heartbeat()) self.assertFalse(manifest.supports_multiple_extensions()) def test_report_msg_if_handler_manifest_contains_invalid_values(self): # Set the boolean fields to invalid values shutil.copyfile(os.path.join(data_dir, "ext", "handler_manifest", "manifest_boolean_fields_invalid.json"), self.test_file) with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_manifest_file", return_value=self.test_file): with patch("azurelinuxagent.ga.exthandlers.add_event") as mock_add_event: manifest = self.ext_handler_instance.load_manifest() manifest.report_invalid_boolean_properties("test_ext") kw_messages = [kw for _, kw in mock_add_event.call_args_list if kw.get('op') == 'ExtensionHandlerManifest'] self.assertEqual(3, len(kw_messages)) self.assertIn("'reportHeartbeat' has a non-boolean value", kw_messages[0]['message']) self.assertIn("'continueOnUpdateFailure' has a non-boolean value", kw_messages[1]['message']) self.assertIn("'supportsMultipleExtensions' has a non-boolean value", kw_messages[2]['message']) class TestExtensionPolicy(TestExtensionBase): def setUp(self): AgentTestCase.setUp(self) self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01)) self.mock_sleep.start() self.policy_path = os.path.join(self.tmp_dir, "waagent_policy.json") # Patch attributes to enable policy feature self.patch_policy_path = patch('azurelinuxagent.common.conf.get_policy_file_path', return_value=str(self.policy_path)) self.patch_policy_path.start() self.patch_conf_flag = patch('azurelinuxagent.ga.policy.policy_engine.conf.get_extension_policy_enabled', return_value=True) self.patch_conf_flag.start() self.maxDiff = None # When long error messages don't match, display the entire diff. 
    def tearDown(self):
        patch.stopall()
        AgentTestCase.tearDown(self)

    def _create_policy_file(self, policy):
        with open(self.policy_path, mode='w') as policy_file:
            if isinstance(policy, dict):
                json.dump(policy, policy_file, indent=4)
            else:
                policy_file.write(policy)
            policy_file.flush()

    def _test_policy_case(self, policy, op, expected_status_code, expected_handler_status, expected_ext_count, expected_status_msg=None):
        # Set up a mock protocol instance.
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            if op == ExtensionRequestedState.Uninstall:
                # Generate a new mock goal state to uninstall the extension - increment the incarnation
                protocol.mock_wire_data.set_incarnation(2)
                protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
                protocol.client.update_goal_state()

            protocol.aggregate_status = None
            protocol.report_vm_status = MagicMock()
            exthandlers_handler = get_exthandlers_handler(protocol)

            # Create policy file and process extensions.
            self._create_policy_file(policy)
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # Assert that agent is reporting the expected handler status
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            self._assert_handler_status(report_vm_status, expected_handler_status, expected_ext_count=expected_ext_count,
                                        version="1.0.0", expected_handler_name='OSTCExtensions.ExampleHandlerLinux',
                                        expected_msg=expected_status_msg, expected_code=expected_status_code)

    def test_should_fail_enable_if_extension_disallowed(self):
        policy = \
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": True,
                }
            }
        expected_msg = "failed to run extension 'OSTCExtensions.ExampleHandlerLinux' because it is not specified as an allowed extension."
        self._test_policy_case(policy=policy, op=ExtensionRequestedState.Enabled,
                               expected_status_code=ExtensionErrorCodes.PluginEnableProcessingFailed,
                               expected_handler_status='NotReady', expected_ext_count=1, expected_status_msg=expected_msg)

    def test_should_fail_enable_for_invalid_policy(self):
        policy = \
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": "False"
                }
            }
        expected_msg = "attribute 'extensionPolicies.allowListedExtensionsOnly'; must be 'boolean'"
        self._test_policy_case(policy=policy, op=ExtensionRequestedState.Enabled,
                               expected_status_code=ExtensionErrorCodes.PluginEnableProcessingFailed,
                               expected_handler_status='NotReady', expected_ext_count=1, expected_status_msg=expected_msg)

    def test_should_fail_extension_if_error_thrown_during_policy_engine_init(self):
        policy = \
            {
                "policyVersion": "0.1.0"
            }
        with patch('azurelinuxagent.ga.policy.policy_engine.ExtensionPolicyEngine.__init__', side_effect=Exception("mock exception")):
            expected_msg = "Extension will not be processed: mock exception"
            self._test_policy_case(policy=policy, op=ExtensionRequestedState.Enabled,
                                   expected_status_code=ExtensionErrorCodes.PluginEnableProcessingFailed,
                                   expected_handler_status='NotReady', expected_ext_count=1, expected_status_msg=expected_msg)

    def test_should_fail_uninstall_if_extension_disallowed(self):
        policy = \
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": True,
                    "signatureRequired": False,
                    "extensions": {}
                },
            }
        expected_msg = "failed to uninstall extension 'OSTCExtensions.ExampleHandlerLinux' because it is not specified as an allowed extension."
        self._test_policy_case(policy=policy, op=ExtensionRequestedState.Uninstall,
                               expected_status_code=ExtensionErrorCodes.PluginDisableProcessingFailed,
                               expected_handler_status='NotReady', expected_ext_count=1, expected_status_msg=expected_msg)

    def test_should_fail_enable_if_dependent_extension_disallowed(self):
        self._create_policy_file({
            "policyVersion": "0.1.0",
            "extensionPolicies": {
                "allowListedExtensionsOnly": True,
                "extensions": {
                    "OSTCExtensions.ExampleHandlerLinux": {}
                }
            }
        })
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_EXT_SEQUENCING) as protocol:
            protocol.aggregate_status = None
            protocol.report_vm_status = MagicMock()
            exthandlers_handler = get_exthandlers_handler(protocol)
            dep_ext_level_2 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux")
            dep_ext_level_1 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux")

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # OtherExampleHandlerLinux should be disallowed by policy, ExampleHandlerLinux should be skipped because
            # dependent extension failed
            self._assert_handler_status(protocol.report_vm_status, expected_status="NotReady", expected_ext_count=1,
                                        version="1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux",
                                        expected_msg=("failed to run extension 'OSTCExtensions.OtherExampleHandlerLinux' "
                                                      "because it is not specified as an allowed extension."))
            self._assert_handler_status(protocol.report_vm_status, expected_status="NotReady", expected_ext_count=0,
                                        version="1.0.0", expected_handler_name="OSTCExtensions.ExampleHandlerLinux",
                                        expected_msg="Skipping processing of extensions since execution of dependent "
                                                     "extension OSTCExtensions.OtherExampleHandlerLinux failed")

            # Check handler list and dependency levels
            self.assertTrue(exthandlers_handler.ext_handlers is not None)
            self.assertEqual(len(exthandlers_handler.ext_handlers), 2)
            self.assertEqual(1, next(handler for handler in exthandlers_handler.ext_handlers
                                     if handler.name == dep_ext_level_1.name).settings[0].dependencyLevel)
            self.assertEqual(2, next(handler for handler in exthandlers_handler.ext_handlers
                                     if handler.name == dep_ext_level_2.name).settings[0].dependencyLevel)

    def test_enable_should_succeed_if_extension_allowed(self):
        policy_cases = [
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": False,
                }
            },
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": True,
                    "extensions": {
                        "OSTCExtensions.ExampleHandlerLinux": {}
                    }
                }
            }
        ]
        for policy in policy_cases:
            self._test_policy_case(policy=policy, op=ExtensionRequestedState.Enabled, expected_status_code=0,
                                   expected_handler_status='Ready', expected_ext_count=1)

    def test_uninstall_should_succeed_if_extension_allowed(self):
        policy_cases = [
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": False,
                }
            },
            {
                "policyVersion": "0.1.0",
                "extensionPolicies": {
                    "allowListedExtensionsOnly": True,
                    "extensions": {
                        "OSTCExtensions.ExampleHandlerLinux": {}
                    }
                }
            }
        ]
        for policy in policy_cases:
            with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
                # Generate a new mock goal state to uninstall the extension - increment the incarnation
                protocol.mock_wire_data.set_incarnation(2)
                protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
                protocol.client.update_goal_state()
                protocol.aggregate_status = None
                protocol.report_vm_status = MagicMock()
                exthandlers_handler = get_exthandlers_handler(protocol)

                # Create policy file and process extensions.
                self._create_policy_file(policy)
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                # Assert that no status is being reported for the extension, to confirm that uninstall was successful.
                report_vm_status = protocol.report_vm_status
                self.assertTrue(report_vm_status.called)
                args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
                vm_status = args[0]
                self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers))

    def test_should_report_both_policy_failure_and_heartbeat_in_status(self):
        """
        If an extension reporting heartbeat is blocked by policy, the agent should report policy failure status
        and concatenate the extension heartbeat.
        """
        # Mock collect_heartbeat() to return heartbeat in test file
        test_file = os.path.join(self.tmp_dir, "ext_heartbeat.json")

        def mock_collect_heartbeat():
            heartbeat_json = fileutil.read_file(test_file)
            heartbeat = json.loads(heartbeat_json)[0]['heartbeat']
            return heartbeat

        # Create a mock heartbeat file reported by an extension
        extension_heartbeat = [
            {
                "version": 1.0,
                "heartbeat": {
                    "status": "ready",
                    "code": 0,
                    "formattedMessage": {
                        "lang": "en-US",
                        "message": "This is a heartbeat message"
                    }
                }
            }
        ]
        with open(test_file, 'w') as out_file:
            out_file.write(json.dumps(extension_heartbeat))

        # Try to install a disallowed extension, then assert that policy failure status is reported instead of heartbeat
        with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_heartbeat_file", return_value=test_file):
            with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.collect_heartbeat", side_effect=mock_collect_heartbeat):
                with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_handler_state", return_value=ExtHandlerState.Enabled):
                    policy = \
                        {
                            "policyVersion": "0.1.0",
                            "extensionPolicies": {
                                "allowListedExtensionsOnly": True
                            }
                        }
                    with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
                        exthandlers_handler = get_exthandlers_handler(protocol)
                        self._create_policy_file(policy)
                        exthandlers_handler.run()

                        # Assert that agent is reporting the expected handler status, heartbeat should be concatenated
                        # to policy failure message
                        vm_status = exthandlers_handler.report_ext_handlers_status()
                        ext_handler = vm_status.vmAgent.extensionHandlers[0]
                        self.assertTrue("failed to run extension 'OSTCExtensions.ExampleHandlerLinux' because it is not specified as an allowed extension" in ext_handler.message,
                                        "Should have reported policy failure")
                        self.assertTrue("Extension was previously enabled and reported the following heartbeat:" in ext_handler.message,
                                        "Should have concatenated heartbeat to handler status message")
                        self.assertEqual(ext_handler.status, "NotReady", "Heartbeat should not have overridden policy failure status")
                        self.assertEqual(ext_handler.code, ExtensionErrorCodes.PluginEnableProcessingFailed,
                                         "Heartbeat should not have overridden policy failure code")

    def test_should_save_policy_file_to_history_directory(self):
        policy_file_name = "waagent_policy.json"
        policy = {
            "policyVersion": "0.1.0"
        }
        self._create_policy_file(policy)

        # Policy file should be written to history folder when extensions are processed
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, save_to_history=True) as protocol:
            # Update goal state with incarnation and etag, so we can search for the correct history folder
            protocol.mock_wire_data.set_incarnation(999)
            protocol.mock_wire_data.set_etag(888)
            protocol.client.update_goal_state()
            exthandlers_handler = get_exthandlers_handler(protocol)
            with patch('azurelinuxagent.ga.policy.policy_engine.conf.get_enable_overprovisioning', return_value=False):
                exthandlers_handler.run()

            # Assert that policy file was copied to history folder.
            root_dir = os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME)
            matches = glob.glob(os.path.join(root_dir, "*_999-888"))
            self.assertTrue(len(matches) == 1)
            history_dir = matches[0]
            file_path = os.path.join(history_dir, policy_file_name)
            self.assertTrue(os.path.exists(file_path), "Policy file was not copied to history folder")
            with open(file_path, mode='r') as f:
                self.assertEqual(policy, json.load(f))


class TestSignatureValidationNotEnforced(TestExtensionBase):
    """
    This tests expected behavior when extension package signature validation is enabled, but not enforced.
    """
    def setUp(self):
        AgentTestCase.setUp(self)
        self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01))
        self.mock_sleep.start()
        self.patch_conf_flag = patch('azurelinuxagent.ga.exthandlers.conf.get_signature_validation_enabled', return_value=True)
        self.patch_conf_flag.start()
        write_signing_certificates()
        self.base_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.0')

    def tearDown(self):
        patch.stopall()
        AgentTestCase.tearDown(self)

    @staticmethod
    def _make_http_get_handler(data_file):
        def http_get_handler(url, *_, **__):
            resp = MagicMock()
            resp.status = 200
            if "manifest.xml" in url:
                content = load_data(data_file.get("manifest"))
                resp.read = Mock(return_value=content.encode("utf-8"))
                resp.getheaders = Mock(return_value=[])
                return resp
            if "VMAccess" in url:
                content = load_bin_data(data_file.get("test_ext"))
                resp.read = Mock(return_value=content)
                return resp
            return None
        return http_get_handler

    def _assert_telemetry_sent(self, patched_add_event, name, version, op, is_success, msg=None):
        telemetry = []
        for _, kw in patched_add_event.call_args_list:
            if (
                kw['name'] == name and
                kw['version'] == version and
                kw['op'] == op and
                kw['is_success'] == is_success and
                (msg is None or msg in kw['message'])
            ):
                telemetry.append(kw)
        self.assertEqual(1, len(telemetry), "Signature validation event (operation '{0}') not sent as telemetry".format(op))

    def _assert_no_error_telemetry_sent(self, patched_add_event, name, version):
        errors = []
        for _, kw in patched_add_event.call_args_list:
            if (
                kw['op'] in (WALAEventOperation.PackageSignatureResult, WALAEventOperation.PackageSigningInfoResult) and
                kw['name'] == name and
                kw['version'] == version and
                not kw['is_success']
            ):
                errors.append(kw)
        self.assertEqual(0, len(errors), "Signature validation should have completed with no errors. Errors: {0}".format(errors))

    def _test_enable_extension(self, data_file, signature_validation_should_succeed, expected_status_code,
                               expected_handler_status, expected_ext_count, expected_status_msg=None,
                               expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_version="1.0.0"):
        # Set up a mock protocol instance.
        with mock_wire_protocol(data_file) as protocol:
            protocol.aggregate_status = None
            protocol.report_vm_status = MagicMock()
            exthandlers_handler = get_exthandlers_handler(protocol)
            protocol.set_http_handlers(http_get_handler=self._make_http_get_handler(data_file))

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # Assert that agent is reporting the expected handler status
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            self._assert_handler_status(report_vm_status, expected_handler_status, expected_ext_count=expected_ext_count,
                                        version=expected_version, expected_handler_name=expected_handler_name,
                                        expected_msg=expected_status_msg, expected_code=expected_status_code)

            # Assert signature validation state
            base_dir = os.path.join(conf.get_lib_dir(), '{0}-{1}'.format(expected_handler_name, expected_version))
            self.assertEqual(signature_validation_should_succeed, signature_has_been_validated(base_dir))

    def test_enable_should_succeed_and_send_telemetry_if_signature_validation_fails(self):
        # Signature validation fails, handler manifest validation succeeds -> enable, send telemetry, state should not be set
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_invalid_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux"
        handler_version = "1.7.0"
        with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event:
            self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                        expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                        expected_status_msg='Plugin enabled', expected_handler_name=handler_name,
                                        expected_version=handler_version)
            # Telemetry should report signature validation failure and manifest validation success
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSignatureResult, is_success=False)
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSigningInfoResult, is_success=True)

    def test_enable_should_succeed_and_send_telemetry_if_handler_manifest_validation_fails(self):
        # Signature validation succeeds, handler manifest validation fails -> enable, send telemetry, state should not be set
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux"
        handler_version = "1.7.0"
        manifest_data = \
            {
                "version": 1.0,
                "handlerManifest": {
                    "disableCommand": "extension_shim.sh -c ./vmaccess.py -d",
                    "enableCommand": "extension_shim.sh -c ./vmaccess.py -e",
                    "installCommand": "extension_shim.sh -c ./vmaccess.py -i",
                    "uninstallCommand": "extension_shim.sh -c ./vmaccess.py -u",
                    "updateCommand": "extension_shim.sh -c ./vmaccess.py -p",
                    "rebootAfterInstall": False,
                    "reportHeartbeat": False
                },
                "signingInfo": {
                    "version": "1.5.0",  # Does not match the version specified in goal state (1.7.0)
                    "type": "VMAccessForLinux",
                    "publisher": "Microsoft.OSTCExtensions.Edp"
                }
            }
        manifest = HandlerManifest(manifest_data)
        with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event:
            with patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.load_manifest', return_value=manifest):
                self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                            expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                            expected_status_msg='Plugin enabled', expected_handler_name=handler_name,
                                            expected_version=handler_version)
                # Telemetry should report successful signature validation and failed manifest validation
                self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                            WALAEventOperation.PackageSignatureResult, is_success=True)
                self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                            WALAEventOperation.PackageSigningInfoResult, is_success=False,
                                            msg="expected extension version '1.7.0' does not match downloaded package version '1.5.0'")

    def test_enable_should_succeed_and_send_telemetry_if_signature_and_handler_manifest_validation_fails(self):
        # Signature validation fails, handler manifest validation fails -> enable, send telemetry, state should not be set
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_invalid_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux"
        handler_version = "1.7.0"
        manifest_data = \
            {
                "version": 1.0,
                "handlerManifest": {
                    "disableCommand": "extension_shim.sh -c ./vmaccess.py -d",
                    "enableCommand": "extension_shim.sh -c ./vmaccess.py -e",
                    "installCommand": "extension_shim.sh -c ./vmaccess.py -i",
                    "uninstallCommand": "extension_shim.sh -c ./vmaccess.py -u",
                    "updateCommand": "extension_shim.sh -c ./vmaccess.py -p",
                    "rebootAfterInstall": False,
                    "reportHeartbeat": False
                },
                "signingInfo": {
                    "version": "1.5.0",  # Does not match the version specified in goal state (1.7.0)
                    "type": "VMAccessForLinux",
                    "publisher": "Microsoft.OSTCExtensions.Edp"
                }
            }
        manifest = HandlerManifest(manifest_data)
        with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event:
            with patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.load_manifest', return_value=manifest):
                self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                            expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                            expected_status_msg='Plugin enabled', expected_handler_name=handler_name,
                                            expected_version=handler_version)
                # Telemetry should report signature validation failure and manifest validation failure
                self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                            WALAEventOperation.PackageSignatureResult, is_success=False)
                self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                            WALAEventOperation.PackageSigningInfoResult, is_success=False,
                                            msg="expected extension version '1.7.0' does not match downloaded package version '1.5.0'")

    def test_enable_should_succeed_if_signature_validation_succeeds(self):
        # Signature validation succeeds, handler manifest validation succeeds -> enable, send telemetry, state should be set
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux"
        handler_version = "1.7.0"
        with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event:
            self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=True,
                                        expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                        expected_status_msg='Plugin enabled',
                                        expected_handler_name="Microsoft.OSTCExtensions.Edp.VMAccessForLinux",
                                        expected_version="1.7.0")
            # Should send telemetry for successful signature validation and handler manifest validation
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSignatureResult, is_success=True)
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSigningInfoResult, is_success=True)
            self._assert_no_error_telemetry_sent(patched_add_event, handler_name, handler_version)

    def test_enable_should_succeed_if_extension_unsigned(self):
        # Extension is unsigned, so signature is not validated -> enable, send telemetry, state should not be set
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf-no_encoded_signature.xml"
        with patch('azurelinuxagent.ga.exthandlers.add_event') as add_event:
            self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                        expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1)
            # Should not have reported any signature validation errors
            self._assert_no_error_telemetry_sent(add_event, "OSTCExtensions.ExampleHandlerLinux", "1.0.0")

    def test_enable_should_succeed_and_not_validate_signature_if_openssl_version_is_unsupported(self):
        # If OpenSSL version is not supported for signature validation, we should not validate signature (state should not be set).
        # Since signature validation is not being enforced, enable should succeed. We also do not send telemetry in this case,
        # because OpenSSL version is sent elsewhere in telemetry.
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        with patch("azurelinuxagent.ga.signature_validation_util._get_openssl_version", return_value="1.0.2"):
            with patch('azurelinuxagent.ga.exthandlers.event.error') as patched_add_event:
                self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                            expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                            expected_status_msg='Plugin enabled',
                                            expected_handler_name="Microsoft.OSTCExtensions.Edp.VMAccessForLinux",
                                            expected_version="1.7.0")
                # Should not have sent any telemetry
                errors = [kw for _, kw in patched_add_event.call_args_list if kw['op'] == WALAEventOperation.SignatureValidation]
                self.assertEqual(0, len(errors), "Should not have sent any telemetry for OpenSSL version mismatch")

    def test_enable_should_succeed_and_not_validate_signature_if_conf_flag_disabled(self):
        # If conf flag is set to false, enable should succeed but signature validation state should not be set.
        self.patch_conf_flag.stop()
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        with patch('azurelinuxagent.ga.exthandlers.conf.get_signature_validation_enabled', return_value=False):
            self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                        expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                        expected_status_msg='Plugin enabled',
                                        expected_handler_name="Microsoft.OSTCExtensions.Edp.VMAccessForLinux",
                                        expected_version="1.7.0")

    def test_uninstall_should_succeed_for_unsigned_extension(self):
        data_file = DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf-no_encoded_signature.xml"
        with mock_wire_protocol(data_file) as protocol:
            # Set up mock protocol
            protocol.aggregate_status = None
            protocol.report_vm_status = MagicMock()
            exthandlers_handler = get_exthandlers_handler(protocol)

            # Enable extension - signature validation should fail and state should not be set
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            self._assert_handler_status(report_vm_status, "Ready", expected_ext_count=1, version="1.0.0",
                                        expected_handler_name="OSTCExtensions.ExampleHandlerLinux",
                                        expected_msg="Plugin enabled", expected_code=0)
            self.assertFalse(signature_has_been_validated(self.base_dir))

            # Generate a new mock goal state to uninstall the extension - increment the incarnation
            protocol.mock_wire_data.set_incarnation(2)
            protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
            protocol.client.update_goal_state()
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # Check that uninstall was successful and handler is no longer reporting status
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            args, _ = report_vm_status.call_args
            vm_status = args[0]
            self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers))

    def test_uninstall_should_succeed_for_extension_failing_signature_validation(self):
        with mock_wire_protocol(DATA_FILE) as protocol:
            # Set up mock protocol
            protocol.aggregate_status = None
            protocol.report_vm_status = MagicMock()
            exthandlers_handler = get_exthandlers_handler(protocol)

            # Enable extension - signature validation should fail and state should not be set
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            self._assert_handler_status(report_vm_status, "Ready", expected_ext_count=1, version="1.0.0",
                                        expected_handler_name="OSTCExtensions.ExampleHandlerLinux",
                                        expected_msg="Plugin enabled", expected_code=0)
            self.assertFalse(signature_has_been_validated(self.base_dir))

            # Generate a new mock goal state to uninstall the extension - increment the incarnation
            protocol.mock_wire_data.set_incarnation(2)
            protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
            protocol.client.update_goal_state()
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # Check that uninstall was successful and handler is no longer reporting status
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            args, _ = report_vm_status.call_args
            vm_status = args[0]
            self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers))

    def test_uninstall_should_succeed_for_extension_with_signature_validated(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["test_ext"] = "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip"
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        with mock_wire_protocol(data_file) as protocol:
            # Set up mock protocol
            protocol.aggregate_status = None
            protocol.report_vm_status = MagicMock()
            exthandlers_handler = get_exthandlers_handler(protocol)

            # Enable extension - extension signature validation should succeed and state should be set
            protocol.set_http_handlers(http_get_handler=self._make_http_get_handler(data_file))
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            self._assert_handler_status(report_vm_status, "Ready", expected_ext_count=1, version="1.7.0",
                                        expected_handler_name="Microsoft.OSTCExtensions.Edp.VMAccessForLinux",
                                        expected_msg="Plugin enabled", expected_code=0)

            # Assert signature validation state
            base_dir = os.path.join(conf.get_lib_dir(), 'Microsoft.OSTCExtensions.Edp.VMAccessForLinux-1.7.0')
            self.assertTrue(signature_has_been_validated(base_dir))

            # Generate a new mock goal state to uninstall the extension - increment the incarnation
            protocol.mock_wire_data.set_incarnation(2)
            protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
            protocol.client.update_goal_state()
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # Check that uninstall was successful and handler is no longer reporting status
            report_vm_status = protocol.report_vm_status
            self.assertTrue(report_vm_status.called)
            args, _ = report_vm_status.call_args
            vm_status = args[0]
            self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers))

    def test_should_enable_existing_zip_package_if_signature_validation_succeeds(self):
        # If an extension zip package already exists but has not been extracted, signature should be validated successfully,
        # and extension should be enabled.
        package_file = os.path.join(self.tmp_dir, "Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        test_zip = os.path.join(data_dir, "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        shutil.copy(test_zip, package_file)
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux"
        handler_version = "1.7.0"
        with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event:
            self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=True,
                                        expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                        expected_status_msg='Plugin enabled', expected_handler_name=handler_name,
                                        expected_version=handler_version)
            # Telemetry should report successful signature validation and manifest validation
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSignatureResult, is_success=True)
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSigningInfoResult, is_success=True)
            # Should not have reported any signature validation errors
            self._assert_no_error_telemetry_sent(patched_add_event, handler_name, handler_version)

    def test_should_enable_existing_zip_package_if_signature_validation_fails(self):
        # Signature validation should fail for existing zip package - extension should still be enabled because
        # we are not enforcing signature.
        package_file = os.path.join(self.tmp_dir, "Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        test_zip = os.path.join(data_dir, "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        shutil.copy(test_zip, package_file)
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf-vm_access_with_invalid_signature.xml"
        data_file["manifest"] = "wire/manifest_vm_access.xml"
        handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux"
        handler_version = "1.7.0"
        with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event:
            self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False,
                                        expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1,
                                        expected_status_msg='Plugin enabled',
                                        expected_handler_name="Microsoft.OSTCExtensions.Edp.VMAccessForLinux",
                                        expected_version="1.7.0")
            # Should have reported signature validation error and successful handler manifest validation
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSignatureResult, is_success=False)
            self._assert_telemetry_sent(patched_add_event, handler_name, handler_version,
                                        WALAEventOperation.PackageSigningInfoResult, is_success=True)

    def test_should_enable_existing_zip_package_if_manifest_validation_fails(self):
        # Manifest validation should fail for existing zip package - extension should still be enabled because
        # we are not enforcing signature.
package_file = os.path.join(self.tmp_dir, "Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip") test_zip = os.path.join(data_dir, "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip") shutil.copy(test_zip, package_file) data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf-vm_access_with_signature.xml" data_file["manifest"] = "wire/manifest_vm_access.xml" handler_name = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux" handler_version = "1.7.0" manifest_data = \ { "version": 1.0, "handlerManifest": { "disableCommand": "extension_shim.sh -c ./vmaccess.py -d", "enableCommand": "extension_shim.sh -c ./vmaccess.py -e", "installCommand": "extension_shim.sh -c ./vmaccess.py -i", "uninstallCommand": "extension_shim.sh -c ./vmaccess.py -u", "updateCommand": "extension_shim.sh -c ./vmaccess.py -p", "rebootAfterInstall": False, "reportHeartbeat": False }, "signingInfo": { "version": "1.5.0", # Does not match the version specified in goal state (1.7.0) "type": "VMAccessForLinux", "publisher": "Microsoft.OSTCExtensions.Edp" } } manifest = HandlerManifest(manifest_data) with patch('azurelinuxagent.ga.signature_validation_util.add_event') as patched_add_event: with patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.load_manifest', return_value=manifest): self._test_enable_extension(data_file=data_file, signature_validation_should_succeed=False, expected_status_code=0, expected_handler_status='Ready', expected_ext_count=1, expected_status_msg='Plugin enabled', expected_handler_name="Microsoft.OSTCExtensions.Edp.VMAccessForLinux", expected_version="1.7.0") # Should report successful signature validation and failed manifest validation self._assert_telemetry_sent(patched_add_event, handler_name, handler_version, WALAEventOperation.PackageSignatureResult, is_success=True) self._assert_telemetry_sent(patched_add_event, handler_name, handler_version, WALAEventOperation.PackageSigningInfoResult, is_success=False, msg="expected 
extension version '1.7.0' does not match downloaded package version '1.5.0'") if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/ga/test_exthandlers.py000066400000000000000000001032261510742556200230610ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import json import os import subprocess import time import uuid from azurelinuxagent.common.agent_supported_feature import AgentSupportedFeature from azurelinuxagent.common.event import AGENT_EVENT_FILE_EXTENSION, WALAEventOperation from azurelinuxagent.common.exception import ExtensionError, ExtensionErrorCodes from azurelinuxagent.common.protocol.restapi import ExtensionStatus, ExtensionSettings, Extension from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.extensionprocessutil import TELEMETRY_MESSAGE_MAX_LEN, format_stdout_stderr, \ read_output from azurelinuxagent.ga.exthandlers import parse_ext_status, ExtHandlerInstance, ExtCommandEnvVariable, \ ExtensionStatusError, _DEFAULT_SEQ_NO, get_exthandlers_handler, ExtHandlerState from tests.lib.mock_wire_protocol import mock_wire_protocol, wire_protocol_data from tests.lib.tools import AgentTestCase, patch, mock_sleep, clear_singleton_instances class TestExtHandlers(AgentTestCase): def setUp(self): 
super(TestExtHandlers, self).setUp() # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) def test_parse_ext_status_should_raise_on_non_array(self): status = json.loads(''' {{ "status": {{ "status": "transitioning", "operation": "Enabling Handler", "code": 0, "name": "Microsoft.Azure.RecoveryServices.SiteRecovery.Linux" }}, "version": 1.0, "timestampUTC": "2020-01-14T15:04:43Z", "longText": "{0}" }}'''.format("*" * 5 * 1024)) with self.assertRaises(ExtensionStatusError) as context_manager: parse_ext_status(ExtensionStatus(seq_no=0), status) error_message = str(context_manager.exception) self.assertIn("The extension status must be an array", error_message) self.assertTrue(0 < len(error_message) - 64 < 4096, "The error message should not be much longer than 4K characters: [{0}]".format(error_message)) def test_parse_extension_status00(self): """ Parse a status report for a successful execution of an extension. """ s = '''[{ "status": { "status": "success", "formattedMessage": { "lang": "en-US", "message": "Command is finished." }, "operation": "Daemon", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux" }, "version": "1.0", "timestampUTC": "2018-04-20T21:20:24Z" } ]''' ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, json.loads(s)) self.assertEqual('0', ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual('Command is finished.', ext_status.message) self.assertEqual('Daemon', ext_status.operation) self.assertEqual('success', ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) def test_parse_extension_status01(self): """ Parse a status report for a failed execution of an extension. The extension returned a bad status/status of failed. 
The agent should handle this gracefully, and convert all unknown status/status values into an error. """ s = '''[{ "status": { "status": "failed", "formattedMessage": { "lang": "en-US", "message": "Enable failed: Failed with error: commandToExecute is empty or invalid ..." }, "operation": "Enable", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux" }, "version": "1.0", "timestampUTC": "2018-04-20T20:50:22Z" }]''' ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, json.loads(s)) self.assertEqual('0', ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual('Enable failed: Failed with error: commandToExecute is empty or invalid ...', ext_status.message) self.assertEqual('Enable', ext_status.operation) self.assertEqual('error', ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) def test_parse_ext_status_should_parse_missing_substatus_as_empty(self): status = '''[{ "status": { "status": "success", "formattedMessage": { "lang": "en-US", "message": "Command is finished." }, "operation": "Enable", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux" }, "version": "1.0", "timestampUTC": "2018-04-20T21:20:24Z" } ]''' extension_status = ExtensionStatus(seq_no=0) parse_ext_status(extension_status, json.loads(status)) self.assertTrue(isinstance(extension_status.substatusList, list), 'substatus was not parsed correctly') self.assertEqual(0, len(extension_status.substatusList)) def test_parse_ext_status_should_parse_null_substatus_as_empty(self): status = '''[{ "status": { "status": "success", "formattedMessage": { "lang": "en-US", "message": "Command is finished." 
}, "operation": "Enable", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux", "substatus": null }, "version": "1.0", "timestampUTC": "2018-04-20T21:20:24Z" } ]''' extension_status = ExtensionStatus(seq_no=0) parse_ext_status(extension_status, json.loads(status)) self.assertTrue(isinstance(extension_status.substatusList, list), 'substatus was not parsed correctly') self.assertEqual(0, len(extension_status.substatusList)) def test_parse_extension_status_with_empty_status(self): """ Parse a status report for a successful execution of an extension. """ # Validating empty status case s = '''[]''' ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, json.loads(s)) self.assertEqual(None, ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual(None, ext_status.message) self.assertEqual(None, ext_status.operation) self.assertEqual(None, ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) # Validating None case ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, None) self.assertEqual(None, ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual(None, ext_status.message) self.assertEqual(None, ext_status.operation) self.assertEqual(None, ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) @patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance._get_last_modified_seq_no_from_config_files') def assert_extension_sequence_number(self, patch_get_largest_seq=None, goal_state_sequence_number=None, disk_sequence_number=None, expected_sequence_number=None): ext = ExtensionSettings() ext.sequenceNumber = goal_state_sequence_number patch_get_largest_seq.return_value = disk_sequence_number ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=None) seq, path = 
instance.get_status_file_path(ext) self.assertEqual(expected_sequence_number, seq) if seq > -1: self.assertTrue(path.endswith('/foo-1.2.3/status/{0}.status'.format(expected_sequence_number))) else: self.assertIsNone(path) def test_extension_sequence_number(self): self.assert_extension_sequence_number(goal_state_sequence_number="12", disk_sequence_number=366, expected_sequence_number=12) self.assert_extension_sequence_number(goal_state_sequence_number=" 12 ", disk_sequence_number=366, expected_sequence_number=12) self.assert_extension_sequence_number(goal_state_sequence_number=" foo", disk_sequence_number=3, expected_sequence_number=3) self.assert_extension_sequence_number(goal_state_sequence_number="-1", disk_sequence_number=3, expected_sequence_number=-1) def test_it_should_report_error_if_plugin_settings_version_mismatch(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_PLUGIN_SETTINGS_MISMATCH) as protocol: with patch("azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config.add_event") as mock_add_event: # Forcing update of GoalState to allow the ExtConfig to report an event protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() plugin_setting_mismatch_calls = [kw for _, kw in mock_add_event.call_args_list if kw['op'] == WALAEventOperation.PluginSettingsVersionMismatch] self.assertEqual(1, len(plugin_setting_mismatch_calls), "PluginSettingsMismatch event should be reported once") self.assertIn('Extension PluginSettings Version Mismatch! 
Expected PluginSettings version: 1.0.0 for Extension: OSTCExtensions.ExampleHandlerLinux' , plugin_setting_mismatch_calls[0]['message'], "Invalid error message with incomplete data detected for PluginSettingsVersionMismatch") self.assertTrue("1.0.2" in plugin_setting_mismatch_calls[0]['message'] and "1.0.1" in plugin_setting_mismatch_calls[0]['message'], "Error message should contain the incorrect versions") self.assertFalse(plugin_setting_mismatch_calls[0]['is_success'], "The event should be false") @patch("azurelinuxagent.common.conf.get_ext_log_dir") def test_command_extension_log_truncates_correctly(self, mock_log_dir): log_dir_path = os.path.join(self.tmp_dir, "log_directory") mock_log_dir.return_value = log_dir_path ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" first_line = "This is the first line!" second_line = "This is the second line." old_logfile_contents = "{first_line}\n{second_line}\n".format(first_line=first_line, second_line=second_line) log_file_path = os.path.join(log_dir_path, "foo", "CommandExecution.log") fileutil.mkdir(os.path.join(log_dir_path, "foo"), mode=0o755) with open(log_file_path, "a") as log_file: log_file.write(old_logfile_contents) _ = ExtHandlerInstance(ext_handler=ext_handler, protocol=None, execution_log_max_size=(len(first_line)+len(second_line)//2)) with open(log_file_path) as truncated_log_file: self.assertEqual(truncated_log_file.read(), "{second_line}\n".format(second_line=second_line)) def test_set_logger_should_not_reset_the_mode_of_the_log_directory(self): ext_log_dir = os.path.join(self.tmp_dir, "log_directory") with patch("azurelinuxagent.common.conf.get_ext_log_dir", return_value=ext_log_dir): ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" ext_handler_instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=None) ext_handler_log_dir = os.path.join(ext_log_dir, ext_handler.name) # Double-check the initial mode get_mode = lambda f: os.stat(f).st_mode & 0o777 mode = 
get_mode(ext_handler_log_dir)
        if mode != 0o755:
            raise Exception("The initial mode of the log directory should be 0o755, got 0{0:o}".format(mode))

        new_mode = 0o700
        os.chmod(ext_handler_log_dir, new_mode)

        ext_handler_instance.set_logger()

        mode = get_mode(ext_handler_log_dir)
        self.assertEqual(new_mode, mode, "The mode of the log directory should not have changed")

    def test_it_should_report_the_message_in_the_heartbeat(self):
        def heartbeat_with_message():
            return {'code': 0, 'formattedMessage': {'lang': 'en-US', 'message': 'This is a heartbeat message'},
                    'status': 'ready'}

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            with patch("azurelinuxagent.common.protocol.wire.WireProtocol.report_vm_status", return_value=None):
                with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.collect_heartbeat",
                           side_effect=heartbeat_with_message):
                    with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_handler_state",
                               return_value=ExtHandlerState.Enabled):
                        with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.collect_ext_status",
                                   return_value=None):
                            exthandlers_handler = get_exthandlers_handler(protocol)
                            exthandlers_handler.run()
                            vm_status = exthandlers_handler.report_ext_handlers_status()
                            ext_handler = vm_status.vmAgent.extensionHandlers[0]
                            self.assertEqual(ext_handler.message,
                                             heartbeat_with_message().get('formattedMessage').get('message'),
                                             "Extension handler messages don't match")
                            self.assertEqual(ext_handler.status, heartbeat_with_message().get('status'),
                                             "Extension handler statuses don't match")


class LaunchCommandTestCase(AgentTestCase):
    """
    Test cases for launch_command
    """
    def setUp(self):
        AgentTestCase.setUp(self)

        self.ext_handler = Extension(name='foo')
        self.ext_handler.version = "1.2.3"
        self.ext_handler_instance = ExtHandlerInstance(ext_handler=self.ext_handler, protocol=WireProtocol("1.2.3.4"))

        self.mock_get_base_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_base_dir", lambda *_: self.tmp_dir)
self.mock_get_base_dir.start() self.log_dir = os.path.join(self.tmp_dir, "log") self.mock_get_log_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_log_dir", lambda *_: self.log_dir) self.mock_get_log_dir.start() self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01)) self.mock_sleep.start() def tearDown(self): self.mock_get_log_dir.stop() self.mock_get_base_dir.stop() self.mock_sleep.stop() AgentTestCase.tearDown(self) @staticmethod def _output_regex(stdout, stderr): return r"\[stdout\]\s+{0}\s+\[stderr\]\s+{1}".format(stdout, stderr) @staticmethod def _find_process(command): for pid in [pid for pid in os.listdir('/proc') if pid.isdigit()]: try: with open(os.path.join('/proc', pid, 'cmdline'), 'r') as cmdline: for line in cmdline.readlines(): if command in line: return True except IOError: # proc has already terminated continue return False def test_it_should_capture_the_output_of_the_command(self): stdout = "stdout" * 5 stderr = "stderr" * 5 command = "produce_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("{0}") sys.stderr.write("{1}") '''.format(stdout, stderr)) def list_directory(): base_dir = self.ext_handler_instance.get_base_dir() return [i for i in os.listdir(base_dir) if not i.endswith(AGENT_EVENT_FILE_EXTENSION)] # ignore telemetry files files_before = list_directory() output = self.ext_handler_instance.launch_command(command) files_after = list_directory() self.assertRegex(output, LaunchCommandTestCase._output_regex(stdout, stderr)) self.assertListEqual(files_before, files_after, "Not all temporary files were deleted. 
File list: {0}".format(files_after)) def test_it_should_raise_an_exception_when_the_command_times_out(self): extension_error_code = ExtensionErrorCodes.PluginHandlerScriptTimedout stdout = "stdout" * 7 stderr = "stderr" * 7 # the signal file is used by the test command to indicate it has produced output signal_file = os.path.join(self.tmp_dir, "signal_file.txt") # the test command produces some output then goes into an infinite loop command = "produce_output_then_hang.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys import time sys.stdout.write("{0}") sys.stdout.flush() sys.stderr.write("{1}") sys.stderr.flush() with open("{2}", "w") as file: while True: file.write(".") time.sleep(1) '''.format(stdout, stderr, signal_file)) # mock time.sleep to wait for the signal file (launch_command implements the time out using polling and sleep) def sleep(seconds): if not os.path.exists(signal_file): sleep.original_sleep(seconds) sleep.original_sleep = time.sleep timeout = 60 start_time = time.time() with patch("time.sleep", side_effect=sleep, autospec=True) as mock_sleep: # pylint: disable=redefined-outer-name with self.assertRaises(ExtensionError) as context_manager: self.ext_handler_instance.launch_command(command, timeout=timeout, extension_error_code=extension_error_code) # the command name and its output should be part of the message message = str(context_manager.exception) command_full_path = os.path.join(self.tmp_dir, command.lstrip(os.path.sep)) self.assertRegex(message, r"Timeout\(\d+\):\s+{0}\s+{1}".format(command_full_path, LaunchCommandTestCase._output_regex(stdout, stderr))) # the exception code should be as specified in the call to launch_command self.assertEqual(context_manager.exception.code, extension_error_code) # the timeout period should have elapsed self.assertGreaterEqual(mock_sleep.call_count, timeout) # The command should have been terminated. 
# The /proc file system may still include the process when we do this check so we try a few times after a short delay; note that we # are mocking sleep, so we need to use the original implementation. terminated = False i = 0 while not terminated and i < 4: if not LaunchCommandTestCase._find_process(command): terminated = True else: sleep.original_sleep(0.25) i += 1 self.assertTrue(terminated, "The command was not terminated") # as a check for the test itself, verify it completed in just a few seconds self.assertLessEqual(time.time() - start_time, 5) def test_it_should_raise_an_exception_when_the_command_fails(self): extension_error_code = 2345 stdout = "stdout" * 3 stderr = "stderr" * 3 exit_code = 99 command = "fail.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("{0}") sys.stderr.write("{1}") exit({2}) '''.format(stdout, stderr, exit_code)) # the output is captured as part of the exception message with self.assertRaises(ExtensionError) as context_manager: self.ext_handler_instance.launch_command(command, extension_error_code=extension_error_code) message = str(context_manager.exception) self.assertRegex(message, r"Non-zero exit code: {0}.+{1}\s+{2}".format(exit_code, command, LaunchCommandTestCase._output_regex(stdout, stderr))) self.assertEqual(context_manager.exception.code, extension_error_code) def test_it_should_not_wait_for_child_process(self): stdout = "stdout" stderr = "stderr" command = "start_child_process.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import sys import time pid = os.fork() if pid == 0: time.sleep(60) else: sys.stdout.write("{0}") sys.stderr.write("{1}") '''.format(stdout, stderr)) start_time = time.time() output = self.ext_handler_instance.launch_command(command) self.assertLessEqual(time.time() - start_time, 5) # Also check that we capture the parent's output self.assertRegex(output, 
LaunchCommandTestCase._output_regex(stdout, stderr)) def test_it_should_capture_the_output_of_child_process(self): parent_stdout = "PARENT STDOUT" parent_stderr = "PARENT STDERR" child_stdout = "CHILD STDOUT" child_stderr = "CHILD STDERR" more_parent_stdout = "MORE PARENT STDOUT" more_parent_stderr = "MORE PARENT STDERR" # the child process uses the signal file to indicate it has produced output signal_file = os.path.join(self.tmp_dir, "signal_file.txt") command = "start_child_with_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import sys import time sys.stdout.write("{0}") sys.stderr.write("{1}") pid = os.fork() if pid == 0: sys.stdout.write("{2}") sys.stderr.write("{3}") open("{6}", "w").close() else: sys.stdout.write("{4}") sys.stderr.write("{5}") while not os.path.exists("{6}"): time.sleep(0.5) '''.format(parent_stdout, parent_stderr, child_stdout, child_stderr, more_parent_stdout, more_parent_stderr, signal_file)) output = self.ext_handler_instance.launch_command(command) self.assertIn(parent_stdout, output) self.assertIn(parent_stderr, output) self.assertIn(child_stdout, output) self.assertIn(child_stderr, output) self.assertIn(more_parent_stdout, output) self.assertIn(more_parent_stderr, output) def test_it_should_capture_the_output_of_child_process_that_fails_to_start(self): parent_stdout = "PARENT STDOUT" parent_stderr = "PARENT STDERR" child_stdout = "CHILD STDOUT" child_stderr = "CHILD STDERR" command = "start_child_that_fails.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import sys import time pid = os.fork() if pid == 0: sys.stdout.write("{0}") sys.stderr.write("{1}") exit(1) else: sys.stdout.write("{2}") sys.stderr.write("{3}") '''.format(child_stdout, child_stderr, parent_stdout, parent_stderr)) output = self.ext_handler_instance.launch_command(command) self.assertIn(parent_stdout, output) self.assertIn(parent_stderr, output) 
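The behavior these child-process tests pin down is that `launch_command` hands a single pair of capture files to the command, and any forked children inherit those file descriptors, so their output is collected alongside the parent's. That mechanism can be sketched independently of the agent code (`run_and_capture` is a hypothetical helper written for illustration, not part of the agent's API):

```python
import subprocess
import tempfile


def run_and_capture(command):
    """Run `command` through the shell and capture stdout/stderr of the
    parent *and* of any children into one temporary file: the children
    inherit the file descriptors across fork/exec."""
    with tempfile.TemporaryFile() as out:
        subprocess.check_call(command, shell=True, stdout=out, stderr=out)
        out.seek(0)
        return out.read().decode()


# The subshell is a separate child process, yet its output lands in the
# same capture file as the parent's, because the descriptor is shared.
output = run_and_capture('echo parent; (echo child)')
```

This is why the tests above can assert on `child_stdout`/`child_stderr` even though the child is a different process: no extra plumbing is needed beyond descriptor inheritance.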
self.assertIn(child_stdout, output) self.assertIn(child_stderr, output) def test_it_should_execute_commands_with_no_output(self): # file used to verify the command completed successfully signal_file = os.path.join(self.tmp_dir, "signal_file.txt") command = "create_file.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' open("{0}", "w").close() '''.format(signal_file)) output = self.ext_handler_instance.launch_command(command) self.assertTrue(os.path.exists(signal_file)) self.assertRegex(output, LaunchCommandTestCase._output_regex('', '')) def test_it_should_not_capture_the_output_of_commands_that_do_their_own_redirection(self): # the test script redirects its output to this file command_output_file = os.path.join(self.tmp_dir, "command_output.txt") stdout = "STDOUT" stderr = "STDERR" # the test script mimics the redirection done by the Custom Script extension command = "produce_output" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' exec &> {0} echo {1} >&2 echo {2} '''.format(command_output_file, stdout, stderr)) output = self.ext_handler_instance.launch_command(command) self.assertRegex(output, LaunchCommandTestCase._output_regex('', '')) with open(command_output_file, "r") as command_output: output = command_output.read() self.assertEqual(output, "{0}\n{1}\n".format(stdout, stderr)) def test_it_should_truncate_the_command_output(self): stdout = "STDOUT" stderr = "STDERR" command = "produce_long_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write( "{0}" * {1}) sys.stderr.write( "{2}" * {3}) '''.format(stdout, int(TELEMETRY_MESSAGE_MAX_LEN / len(stdout)), stderr, int(TELEMETRY_MESSAGE_MAX_LEN / len(stderr)))) output = self.ext_handler_instance.launch_command(command) self.assertLessEqual(len(output), TELEMETRY_MESSAGE_MAX_LEN) self.assertIn(stdout, output) self.assertIn(stderr, output) def 
test_it_should_read_only_the_head_of_large_outputs(self): command = "produce_long_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("O" * 5 * 1024 * 1024) sys.stderr.write("E" * 5 * 1024 * 1024) ''') # Mocking the call to file.read() is difficult, so instead we mock the call to format_stdout_stderr, which takes the # return value of the calls to file.read(). The intention of the test is to verify we never read (and load in memory) # more than a few KB of data from the files used to capture stdout/stderr with patch('azurelinuxagent.ga.extensionprocessutil.format_stdout_stderr', side_effect=format_stdout_stderr) as mock_format: output = self.ext_handler_instance.launch_command(command) self.assertGreaterEqual(len(output), 1024) self.assertLessEqual(len(output), TELEMETRY_MESSAGE_MAX_LEN) self.assertEqual(1, mock_format.call_count, "format_stdout_stderr should be called once") args, kwargs = mock_format.call_args # pylint: disable=unused-variable stdout, stderr = args self.assertGreaterEqual(len(stdout), 1024) self.assertLessEqual(len(stdout), TELEMETRY_MESSAGE_MAX_LEN) self.assertGreaterEqual(len(stderr), 1024) self.assertLessEqual(len(stderr), TELEMETRY_MESSAGE_MAX_LEN) def test_it_should_handle_errors_while_reading_the_command_output(self): command = "produce_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("STDOUT") sys.stderr.write("STDERR") ''') # Mocking the call to file.read() is difficult, so instead we mock the call to_capture_process_output, # which will call file.read() and we force stdout/stderr to be None; this will produce an exception when # trying to use these files. 
original_capture_process_output = read_output

        def capture_process_output(stdout_file, stderr_file):  # pylint: disable=unused-argument
            return original_capture_process_output(None, None)

        with patch('azurelinuxagent.ga.extensionprocessutil.read_output', side_effect=capture_process_output):
            output = self.ext_handler_instance.launch_command(command)

        self.assertIn("[stderr]\nCannot read stdout/stderr:", output)

    def test_it_should_contain_all_helper_environment_variables(self):
        wire_ip = str(uuid.uuid4())
        ext_handler_instance = ExtHandlerInstance(ext_handler=self.ext_handler, protocol=WireProtocol(wire_ip))

        helper_env_vars = {ExtCommandEnvVariable.ExtensionSeqNumber: _DEFAULT_SEQ_NO,
                           ExtCommandEnvVariable.ExtensionPath: self.tmp_dir,
                           ExtCommandEnvVariable.ExtensionVersion: ext_handler_instance.ext_handler.version,
                           ExtCommandEnvVariable.WireProtocolAddress: wire_ip}
        command = """ printenv | grep -E '(%s)' """ % '|'.join(helper_env_vars.keys())

        test_file = 'printHelperEnvironments.sh'
        self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), test_file), command)

        with patch("subprocess.Popen", wraps=subprocess.Popen) as patch_popen:
            # Returning empty list for get_agent_supported_features_list_for_extensions as we have a separate test for it
            with patch("azurelinuxagent.ga.exthandlers.get_agent_supported_features_list_for_extensions", return_value={}):
                output = ext_handler_instance.launch_command(test_file)

        args, kwargs = patch_popen.call_args  # pylint: disable=unused-variable
        without_os_env = dict((k, v) for (k, v) in kwargs['env'].items() if k not in os.environ)

        # This check will fail if any helper environment variables are added/removed later on
        self.assertEqual(helper_env_vars, without_os_env)

        # This check verifies that the expected values are set for the extension commands
        for helper_var in helper_env_vars:
            self.assertIn("%s=%s" % (helper_var, helper_env_vars[helper_var]), output)

    def
test_it_should_pass_supported_features_list_as_environment_variables(self): class TestFeature(AgentSupportedFeature): def __init__(self, name, version, supported): super(TestFeature, self).__init__(name=name, version=version, supported=supported) test_name = str(uuid.uuid4()) test_version = str(uuid.uuid4()) command = "check_env.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import json import sys features = os.getenv("{0}") if not features: print("{0} not found in environment") sys.exit(0) l = json.loads(features) found = False for feature in l: if feature['Key'] == "{1}" and feature['Value'] == "{2}": found = True break print("Found Feature %s: %s" % ("{1}", found)) '''.format(ExtCommandEnvVariable.ExtensionSupportedFeatures, test_name, test_version)) # It should include all supported features and pass it as Environment Variable to extensions test_supported_features = {test_name: TestFeature(name=test_name, version=test_version, supported=True)} with patch("azurelinuxagent.common.agent_supported_feature.__EXTENSION_ADVERTISED_FEATURES", test_supported_features): output = self.ext_handler_instance.launch_command(command) self.assertIn("[stdout]\nFound Feature {0}: True".format(test_name), output, "Feature not found") # It should not include the feature if feature not supported test_supported_features = { test_name: TestFeature(name=test_name, version=test_version, supported=False), "testFeature": TestFeature(name="testFeature", version="1.2.1", supported=True) } with patch("azurelinuxagent.common.agent_supported_feature.__EXTENSION_ADVERTISED_FEATURES", test_supported_features): output = self.ext_handler_instance.launch_command(command) self.assertIn("[stdout]\nFound Feature {0}: False".format(test_name), output, "Feature wrongfully found") # It should not include the SupportedFeatures Key in Environment variables if no features supported test_supported_features = {test_name: TestFeature(name=test_name, 
version=test_version, supported=False)} with patch("azurelinuxagent.common.agent_supported_feature.__EXTENSION_ADVERTISED_FEATURES", test_supported_features): output = self.ext_handler_instance.launch_command(command) self.assertIn( "[stdout]\n{0} not found in environment".format(ExtCommandEnvVariable.ExtensionSupportedFeatures), output, "Environment variable should not be found") Azure-WALinuxAgent-a976115/tests/ga/test_exthandlers_download_extension.py000066400000000000000000000311571510742556200270470ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. import contextlib import os import time import zipfile from azurelinuxagent.common.exception import ExtensionDownloadError, ExtensionErrorCodes from azurelinuxagent.common.protocol.restapi import Extension, ExtHandlerPackage from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.ga.exthandlers import ExtHandlerInstance, ExtHandlerState from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol from tests.lib.tools import AgentTestCase, patch, Mock class DownloadExtensionTestCase(AgentTestCase): """ Test cases for launch_command """ @classmethod def setUpClass(cls): AgentTestCase.setUpClass() cls.mock_cgroups = patch("azurelinuxagent.ga.exthandlers.CGroupConfigurator") cls.mock_cgroups.start() @classmethod def tearDownClass(cls): cls.mock_cgroups.stop() AgentTestCase.tearDownClass() def setUp(self): AgentTestCase.setUp(self) ext_handler = Extension(name='Microsoft.CPlat.Core.RunCommandLinux') ext_handler.version = "1.0.0" protocol = WireProtocol("http://Microsoft.CPlat.Core.RunCommandLinux/foo-bar") protocol.client.get_host_plugin = Mock() protocol.client.get_artifact_request = Mock(return_value=(None, None)) # create a dummy goal state, since downloads are done via the GoalState class with mock_wire_protocol(wire_protocol_data.DATA_FILE) as p: goal_state = 
p.get_goal_state() goal_state._wire_client = protocol.client protocol.client._goal_state = goal_state self.pkg = ExtHandlerPackage() self.pkg.uris = [ 'https://zrdfepirv2cy4prdstr00a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0', 'https://zrdfepirv2cy4prdstr01a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0', 'https://zrdfepirv2cy4prdstr02a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0', 'https://zrdfepirv2cy4prdstr03a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0', 'https://zrdfepirv2cy4prdstr04a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0' ] self.ext_handler_instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=protocol) self.ext_handler_instance.pkg = self.pkg self.extension_dir = os.path.join(self.tmp_dir, "Microsoft.CPlat.Core.RunCommandLinux-1.0.0") self.mock_get_base_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_base_dir", return_value=self.extension_dir) self.mock_get_base_dir.start() self.mock_get_log_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_log_dir", return_value=self.tmp_dir) self.mock_get_log_dir.start() self.agent_dir = self.tmp_dir self.mock_get_lib_dir = patch("azurelinuxagent.ga.exthandlers.conf.get_lib_dir", return_value=self.agent_dir) self.mock_get_lib_dir.start() def tearDown(self): self.mock_get_lib_dir.stop() self.mock_get_log_dir.stop() self.mock_get_base_dir.stop() AgentTestCase.tearDown(self) _extension_command = "RunCommandLinux.sh" @staticmethod def _create_zip_file(filename): file = None # pylint: disable=redefined-builtin try: file = zipfile.ZipFile(filename, "w") info = zipfile.ZipInfo(DownloadExtensionTestCase._extension_command) info.date_time = time.localtime(time.time())[:6] 
info.compress_type = zipfile.ZIP_DEFLATED file.writestr(info, "#!/bin/sh\necho 'RunCommandLinux executed successfully'\n") finally: if file is not None: file.close() @staticmethod def _create_invalid_zip_file(filename): with open(filename, "w") as file: # pylint: disable=redefined-builtin file.write("An invalid ZIP file\n") def _get_extension_base_dir(self): return self.extension_dir def _get_extension_package_file(self): return os.path.join(self.agent_dir, self.ext_handler_instance.get_extension_package_zipfile_name()) def _get_extension_command_file(self): return os.path.join(self.extension_dir, DownloadExtensionTestCase._extension_command) def _assert_download_and_expand_succeeded(self): self.assertTrue(os.path.exists(self._get_extension_base_dir()), "The extension package was not downloaded to the expected location") self.assertTrue(os.path.exists(self._get_extension_command_file()), "The extension package was not expanded to the expected location") @staticmethod @contextlib.contextmanager def create_mock_stream(stream_function): with patch("azurelinuxagent.common.protocol.wire.WireClient.stream", side_effect=stream_function) as mock_stream: mock_stream.download_failures = 0 with patch('time.sleep'): # don't sleep in-between retries yield mock_stream def test_it_should_download_and_expand_extension_package(self): def stream(_, destination, **__): DownloadExtensionTestCase._create_zip_file(destination) return True with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream: with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.report_event") as mock_report_event: self.ext_handler_instance.download() # first download attempt should succeed self.assertEqual(1, mock_stream.call_count, "wireserver stream should be called once") self.assertEqual(1, mock_report_event.call_count, "report_event should be called once") self._assert_download_and_expand_succeeded() def test_it_should_use_existing_extension_package_when_already_downloaded(self): 
DownloadExtensionTestCase._create_zip_file(self._get_extension_package_file()) with DownloadExtensionTestCase.create_mock_stream(lambda: None) as mock_stream: self.ext_handler_instance.download() mock_stream.assert_not_called() self.assertTrue(os.path.exists(self._get_extension_command_file()), "The extension package was not expanded to the expected location") def test_it_should_ignore_existing_extension_package_when_it_is_invalid(self): def stream(_, destination, **__): DownloadExtensionTestCase._create_zip_file(destination) return True DownloadExtensionTestCase._create_invalid_zip_file(self._get_extension_package_file()) with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream: self.ext_handler_instance.download() self.assertEqual(1, mock_stream.call_count, "wireserver stream should be called once") self._assert_download_and_expand_succeeded() def test_it_should_maintain_extension_handler_state_when_good_zip_exists(self): DownloadExtensionTestCase._create_zip_file(self._get_extension_package_file()) self.ext_handler_instance.set_handler_state(ExtHandlerState.NotInstalled) self.ext_handler_instance.download() self._assert_download_and_expand_succeeded() self.assertTrue(os.path.exists(os.path.join(self.ext_handler_instance.get_conf_dir(), "HandlerState")), "Ensure that the HandlerState file exists on disk") self.assertEqual(self.ext_handler_instance.get_handler_state(), ExtHandlerState.NotInstalled, "Ensure that the state is maintained for extension HandlerState") def test_it_should_maintain_extension_handler_state_when_bad_zip_exists_and_recovers_with_good_zip(self): def stream(_, destination, **__): DownloadExtensionTestCase._create_zip_file(destination) return True DownloadExtensionTestCase._create_invalid_zip_file(self._get_extension_package_file()) self.ext_handler_instance.set_handler_state(ExtHandlerState.NotInstalled) with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream: self.ext_handler_instance.download() 
            self.assertEqual(1, mock_stream.call_count, "wireserver stream should be called once")
        self._assert_download_and_expand_succeeded()
        self.assertEqual(self.ext_handler_instance.get_handler_state(), ExtHandlerState.NotInstalled,
                         "Ensure that the state is maintained for extension HandlerState")

    def test_it_should_maintain_extension_handler_state_when_it_downloads_bad_zips(self):
        def stream(_, destination, **__):
            DownloadExtensionTestCase._create_invalid_zip_file(destination)
            return True

        self.ext_handler_instance.set_handler_state(ExtHandlerState.NotInstalled)

        with DownloadExtensionTestCase.create_mock_stream(stream):
            with self.assertRaises(ExtensionDownloadError):
                self.ext_handler_instance.download()

        self.assertFalse(os.path.exists(self._get_extension_package_file()),
                         "The bad zip extension package should not be downloaded to the expected location")
        self.assertFalse(os.path.exists(self._get_extension_command_file()),
                         "The extension package should not be expanded to the expected location due to bad zip")
        self.assertEqual(self.ext_handler_instance.get_handler_state(), ExtHandlerState.NotInstalled,
                         "Ensure that the state is maintained for extension HandlerState")

    def test_it_should_use_alternate_uris_when_download_fails(self):
        def stream(_, destination, **__):
            # fail a few times, then succeed
            if mock_stream.download_failures < 3:
                mock_stream.download_failures += 1
                return None
            DownloadExtensionTestCase._create_zip_file(destination)
            return True

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()
            self.assertEqual(mock_stream.call_count, mock_stream.download_failures + 1)

        self._assert_download_and_expand_succeeded()

    def test_it_should_use_alternate_uris_when_download_raises_an_exception(self):
        def stream(_, destination, **__):
            # fail a few times, then succeed
            if mock_stream.download_failures < 3:
                mock_stream.download_failures += 1
                raise Exception("Download failed")
            DownloadExtensionTestCase._create_zip_file(destination)
            return True

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()
            self.assertEqual(mock_stream.call_count, mock_stream.download_failures + 1)

        self._assert_download_and_expand_succeeded()

    def test_it_should_use_alternate_uris_when_it_downloads_an_invalid_package(self):
        def stream(_, destination, **__):
            # fail a few times, then succeed
            if mock_stream.download_failures < 3:
                mock_stream.download_failures += 1
                DownloadExtensionTestCase._create_invalid_zip_file(destination)
            else:
                DownloadExtensionTestCase._create_zip_file(destination)
            return True

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()
            self.assertEqual(mock_stream.call_count, mock_stream.download_failures + 1)

        self._assert_download_and_expand_succeeded()

    def test_it_should_raise_an_exception_when_all_downloads_fail(self):
        def stream(_, target_file, **___):
            stream.target_file = target_file
            DownloadExtensionTestCase._create_invalid_zip_file(target_file)
            return True
        stream.target_file = None

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            with self.assertRaises(ExtensionDownloadError) as context_manager:
                self.ext_handler_instance.download()

        self.assertEqual(mock_stream.call_count, len(self.pkg.uris))

        self.assertRegex(str(context_manager.exception), "Failed to download .* from all URIs")
        self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginManifestDownloadError)

        self.assertFalse(os.path.exists(self.extension_dir), "The extension directory was not removed")
        self.assertFalse(os.path.exists(stream.target_file), "The extension package was not removed")


# ---- tests/ga/test_exthandlers_exthandlerinstance.py ----

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the Apache License.
import os import shutil import sys from azurelinuxagent.common.protocol.restapi import Extension, ExtHandlerPackage from azurelinuxagent.ga.exthandlers import ExtHandlerInstance from tests.lib.tools import AgentTestCase, patch class ExtHandlerInstanceTestCase(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" self.ext_handler_instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=None) pkg_uri = "http://bar/foo__1.2.3" self.ext_handler_instance.pkg = ExtHandlerPackage(ext_handler.version) self.ext_handler_instance.pkg.uris.append(pkg_uri) self.base_dir = self.tmp_dir self.extension_directory = os.path.join(self.tmp_dir, "extension_directory") self.mock_get_base_dir = patch.object(self.ext_handler_instance, "get_base_dir", return_value=self.extension_directory) self.mock_get_base_dir.start() def tearDown(self): self.mock_get_base_dir.stop() super(ExtHandlerInstanceTestCase, self).tearDown() def test_rm_ext_handler_dir_should_remove_the_extension_packages(self): os.mkdir(self.extension_directory) open(os.path.join(self.extension_directory, "extension_file1"), 'w').close() open(os.path.join(self.extension_directory, "extension_file2"), 'w').close() open(os.path.join(self.extension_directory, "extension_file3"), 'w').close() open(os.path.join(self.base_dir, "foo__1.2.3.zip"), 'w').close() self.ext_handler_instance.remove_ext_handler() self.assertFalse(os.path.exists(self.extension_directory)) self.assertFalse(os.path.exists(os.path.join(self.base_dir, "foo__1.2.3.zip"))) def test_rm_ext_handler_dir_should_remove_the_extension_directory(self): os.mkdir(self.extension_directory) os.mknod(os.path.join(self.extension_directory, "extension_file1")) os.mknod(os.path.join(self.extension_directory, "extension_file2")) os.mknod(os.path.join(self.extension_directory, "extension_file3")) self.ext_handler_instance.remove_ext_handler() self.assertFalse(os.path.exists(self.extension_directory)) def 
test_rm_ext_handler_dir_should_not_report_an_event_if_the_extension_directory_does_not_exist(self): if os.path.exists(self.extension_directory): os.rmdir(self.extension_directory) with patch.object(self.ext_handler_instance, "report_event") as mock_report_event: self.ext_handler_instance.remove_ext_handler() mock_report_event.assert_not_called() def test_rm_ext_handler_dir_should_not_report_an_event_if_a_child_is_removed_asynchronously_while_deleting_the_extension_directory(self): os.mkdir(self.extension_directory) os.mknod(os.path.join(self.extension_directory, "extension_file1")) os.mknod(os.path.join(self.extension_directory, "extension_file2")) os.mknod(os.path.join(self.extension_directory, "extension_file3")) # # Some extensions uninstall asynchronously and the files we are trying to remove may be removed # while shutil.rmtree is traversing the extension's directory. Mock this by deleting a file # twice (the second call will produce "[Errno 2] No such file or directory", which should not be # reported as a telemetry event. 
        # In order to mock this, we need to know that remove_ext_handler invokes Python's shutil.rmtree,
        # which in turn invokes os.unlink (Python 3) or os.remove (Python 2)
        #
        remove_api_name = "unlink" if sys.version_info >= (3, 0) else "remove"

        original_remove_api = getattr(shutil.os, remove_api_name)
        extension_directory = self.extension_directory

        def mock_remove(path, dir_fd=None):
            if dir_fd is not None:  # path is relative, make it absolute
                path = os.path.join(extension_directory, path)
            if path.endswith("extension_file2"):
                original_remove_api(path)
                mock_remove.file_deleted_asynchronously = True
            original_remove_api(path)
        mock_remove.file_deleted_asynchronously = False

        with patch.object(shutil.os, remove_api_name, mock_remove):
            with patch.object(self.ext_handler_instance, "report_event") as mock_report_event:
                self.ext_handler_instance.remove_ext_handler()

        mock_report_event.assert_not_called()

        # The next 2 asserts are checks on the mock itself, in case the implementation of
        # remove_ext_handler changes (mocks may need to be updated then)
        self.assertTrue(mock_remove.file_deleted_asynchronously)  # verify the mock was actually called
        self.assertFalse(os.path.exists(self.extension_directory))  # verify the error produced by the mock did not prevent the deletion

    def test_rm_ext_handler_dir_should_report_an_event_if_an_error_occurs_while_deleting_the_extension_directory(self):
        os.mkdir(self.extension_directory)
        os.mknod(os.path.join(self.extension_directory, "extension_file1"))
        os.mknod(os.path.join(self.extension_directory, "extension_file2"))
        os.mknod(os.path.join(self.extension_directory, "extension_file3"))

        # The mock below relies on the knowledge that remove_ext_handler invokes Python's shutil.rmtree,
        # which in turn invokes os.unlink (Python 3) or os.remove (Python 2)
        remove_api_name = "unlink" if sys.version_info >= (3, 0) else "remove"
        original_remove_api = getattr(shutil.os, remove_api_name)

        def mock_remove(path, dir_fd=None):  # pylint: disable=unused-argument
            if path.endswith("extension_file2"):
                raise IOError(999, "A mocked error", "extension_file2")
            original_remove_api(path)

        with patch.object(shutil.os, remove_api_name, mock_remove):
            with patch.object(self.ext_handler_instance, "report_event") as mock_report_event:
                self.ext_handler_instance.remove_ext_handler()

        args, kwargs = mock_report_event.call_args  # pylint: disable=unused-variable
        self.assertTrue("A mocked error" in kwargs["message"])


# ---- tests/ga/test_firewall_manager.py ----

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import contextlib
import os
import unittest

from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.ga.firewall_manager import FirewallManager, IpTables, FirewallCmd, NfTables, FirewallStateError, FirewallManagerNotAvailableError
from tests.lib.tools import AgentTestCase, patch
from tests.lib.mock_firewall_command import MockIpTables, MockFirewallCmd, MockNft


@contextlib.contextmanager
def firewall_command_exists_mock(iptables_exist=True, firewallcmd_exist=True, nft_exists=True):
    """
    Mocks the shellutil.run_command method to fake calls to the iptables/firewall-cmd/nft commands.

    If each of those commands should exist, the call is faked to return success. Otherwise, the call is
    faked to invoke a non-existing command.
""" commands = { "iptables": iptables_exist, "firewall-cmd": firewallcmd_exist, "nft": nft_exists } original_run_command = shellutil.run_command def mock_run_command(command, *args, **kwargs): command_exists = commands.get(command[0]) if command_exists is not None: command = ['sh', '-c', "exit 0"] if command_exists else ["fake-command-that-does-not-exist"] return original_run_command(command, *args, **kwargs) with patch("azurelinuxagent.ga.firewall_manager.shellutil.run_command", side_effect=mock_run_command) as patcher: yield patcher class TestFirewallManager(AgentTestCase): def test_create_should_prefer_iptables_when_both_iptables_and_nftables_exist(self): with firewall_command_exists_mock(iptables_exist=True, nft_exists=True): firewall = FirewallManager.create('168.63.129.16') self.assertIsInstance(firewall, IpTables) def test_create_should_use_nftables_when_iptables_does_not_exist(self): with firewall_command_exists_mock(iptables_exist=False, nft_exists=True): firewall = FirewallManager.create('168.63.129.16') self.assertIsInstance(firewall, NfTables) def test_create_should_raise_FirewallManagerNotAvailableError_when_both_iptables_and_nftables_do_not_exist(self): with firewall_command_exists_mock(iptables_exist=False, nft_exists=False): with self.assertRaises(FirewallManagerNotAvailableError): FirewallManager.create('168.63.129.16') class _TestFirewallCommand(AgentTestCase): """ Defines the test cases common to TestIpTables and TestFirewallCmd. Note that the test cases are marked as protected to prevent the unit test runner from executing them directly. 
""" def _test_setup_should_set_all_the_firewall_rules(self, firewall_cmd_type, mock_firewall_cmd_type): with mock_firewall_cmd_type() as mock: firewall = firewall_cmd_type('168.63.129.16') firewall.setup() self.assertEqual( [ mock.get_accept_dns_command(mock.add_option), mock.get_accept_command(mock.add_option), mock.get_drop_command(mock.add_option), ], mock.call_list, "Expected exactly 3 calls to the {0} (add) command".format(mock.add_option)) def _test_remove_should_delete_all_rules(self, firewall_cmd_type, mock_firewall_cmd_type): with mock_firewall_cmd_type() as mock: firewall = firewall_cmd_type('168.63.129.16') firewall.remove() self.assertEqual( [ mock.get_accept_dns_command(mock.check_option), mock.get_accept_command(mock.check_option), mock.get_drop_command(mock.check_option), mock.get_accept_dns_command(mock.delete_option), mock.get_accept_command(mock.delete_option), mock.get_drop_command(mock.delete_option) ], mock.call_list, "Expected 3 calls to the {0} (check) command, followed by 3 calls to the {1} (delete) command".format(mock.add_option, mock.delete_option)) def _test_remove_should_not_attempt_to_delete_rules_that_do_not_exist(self, firewall_cmd_type, mock_firewall_cmd_type): with mock_firewall_cmd_type() as mock: mock.set_return_values(mock.check_option, accept_dns=0, accept=1, drop=0, legacy=0) # The accept rule does not exist firewall = firewall_cmd_type('168.63.129.16') firewall.remove() self.assertEqual( [ mock.get_accept_dns_command(mock.check_option), mock.get_accept_command(mock.check_option), mock.get_drop_command(mock.check_option), mock.get_accept_dns_command(mock.delete_option), mock.get_drop_command(mock.delete_option), ], mock.call_list, "Expected 3 calls to the {0} (check) command followed by 2 calls to the {1} (delete) command (accept DNS and drop)".format(mock.check_option, mock.delete_option)) def _test_check_should_verify_all_rules(self, firewall_cmd_type, mock_firewall_cmd_type): with mock_firewall_cmd_type() as mock: firewall 
= firewall_cmd_type('168.63.129.16') firewall.check() self.assertEqual( [ mock.get_accept_dns_command(mock.check_option), mock.get_accept_command(mock.check_option), mock.get_drop_command(mock.check_option) ], mock.call_list, "Expected 3 calls to the {0} (check) command".format(mock.check_option)) def _test_remove_legacy_rule_should_delete_the_legacy_rule(self, firewall_cmd_type, mock_firewall_cmd_type): with mock_firewall_cmd_type() as mock: firewall = firewall_cmd_type('168.63.129.16') firewall.remove_legacy_rule() self.assertEqual( [ mock.get_legacy_command(mock.check_option), mock.get_legacy_command(mock.delete_option) ], mock.call_list, "Expected a check ({0}) for the legacy rule, followed by a delete ({1}) of the rule".format(mock.check_option, mock.delete_option)) class TestIpTables(_TestFirewallCommand): def test_it_should_raise_FirewallManagerNotAvailableError_when_the_command_is_not_available(self): with firewall_command_exists_mock(iptables_exist=False): with self.assertRaises(FirewallManagerNotAvailableError): IpTables('168.63.129.16') def test_setup_should_set_all_the_firewall_rules(self): self._test_setup_should_set_all_the_firewall_rules(IpTables, MockIpTables) def test_remove_should_delete_all_rules(self): self._test_remove_should_delete_all_rules(IpTables, MockIpTables) def test_remove_should_not_attempt_to_delete_rules_that_do_not_exist(self): self._test_remove_should_not_attempt_to_delete_rules_that_do_not_exist(IpTables, MockIpTables) def test_check_should_verify_all_rules(self): self._test_check_should_verify_all_rules(IpTables, MockIpTables) def test_remove_legacy_rule_should_delete_the_legacy_rule(self): self._test_remove_legacy_rule_should_delete_the_legacy_rule(IpTables, MockIpTables) def test_it_should_not_use_the_wait_option_on_iptables_versions_less_than_1_4_21(self): with MockIpTables(version='1.4.20') as mock_iptables: firewall = IpTables('168.63.129.16') firewall.setup() self.assertEqual( [ 
                    MockIpTables.get_accept_dns_command("-A").replace("-w ", ""),
                    MockIpTables.get_accept_command("-A").replace("-w ", ""),
                    MockIpTables.get_drop_command("-A").replace("-w ", "")
                ],
                mock_iptables.call_list,
                "Expected only 3 calls to the -A (append) command without the -w option")


class TestFirewallCmd(_TestFirewallCommand):
    def test_it_should_raise_FirewallManagerNotAvailableError_when_the_command_is_not_available(self):
        with firewall_command_exists_mock(firewallcmd_exist=False):
            with self.assertRaises(FirewallManagerNotAvailableError):
                FirewallCmd('168.63.129.16')

    def test_setup_should_set_all_the_firewall_rules(self):
        self._test_setup_should_set_all_the_firewall_rules(FirewallCmd, MockFirewallCmd)

    def test_remove_should_delete_all_rules(self):
        self._test_remove_should_delete_all_rules(FirewallCmd, MockFirewallCmd)

    def test_remove_should_not_attempt_to_delete_rules_that_do_not_exist(self):
        self._test_remove_should_not_attempt_to_delete_rules_that_do_not_exist(FirewallCmd, MockFirewallCmd)

    def test_check_should_verify_all_rules(self):
        self._test_check_should_verify_all_rules(FirewallCmd, MockFirewallCmd)

    def test_remove_legacy_rule_should_delete_the_legacy_rule(self):
        self._test_remove_legacy_rule_should_delete_the_legacy_rule(FirewallCmd, MockFirewallCmd)


class TestNft(AgentTestCase):
    def test_it_should_raise_FirewallManagerNotAvailableError_when_the_command_is_not_available(self):
        with firewall_command_exists_mock(nft_exists=False):
            with self.assertRaises(FirewallManagerNotAvailableError):
                NfTables('168.63.129.16')

    def test_setup_should_set_the_walinuxagent_table(self):
        with MockNft() as mock_nft:
            firewall = NfTables('168.63.129.16')
            firewall.setup()

            self.assertEqual(len(mock_nft.call_list), 1, "Expected exactly 1 call to execute a script to create the walinuxagent table; got {0}".format(mock_nft.call_list))
            script = mock_nft.call_list[0]
            self.assertIn("add table ip walinuxagent", script, "The setup script should create the walinuxagent table. Script: {0}".format(script))
            self.assertIn("add chain ip walinuxagent output", script, "The setup script should create the output chain. Script: {0}".format(script))
            self.assertIn("add rule ip walinuxagent output ", script, "The setup script should create the rule to manage the output chain. Script: {0}".format(script))

    def test_remove_should_delete_the_walinuxagent_table(self):
        with MockNft() as mock_nft:
            firewall = NfTables('168.63.129.16')
            firewall.remove()
            self.assertEqual(['nft delete table walinuxagent'], mock_nft.call_list, "Expected a call to delete the walinuxagent table")

    def test_check_should_verify_all_rules(self):
        with MockNft() as mock_nft:
            _, walinuxagent_table = mock_nft.get_return_value(mock_nft.get_list_command("table"))

            firewall = NfTables('168.63.129.16')

            # Remove the clause for DNS and verify check() fails
            stdout = walinuxagent_table.replace('{ "match": {"op": "!=", "left": { "payload": { "protocol": "tcp", "field": "dport" } }, "right": 53}},', '')
            mock_nft.set_return_value("list", "table", (0, stdout))
            with self.assertRaises(FirewallStateError) as context:
                firewall.check()
            self.assertIn("['No expression excludes the DNS port']", str(context.exception), "Expected an error message indicating the DNS port is not excluded")

            # Remove the clause for root and verify check() fails
            stdout = walinuxagent_table.replace('{ "match": {"op": "!=", "left": { "meta": { "key": "skuid" } }, "right": ' + str(os.getuid()) + '}},', '')
            mock_nft.set_return_value("list", "table", (0, stdout))
            with self.assertRaises(FirewallStateError) as context:
                firewall.check()
            self.assertIn('["No expression excludes the Agent\'s UID"]', str(context.exception), "Expected an error message indicating the Agent's UID is not excluded")

            # Remove the "drop" clause and verify check() fails
            stdout = walinuxagent_table.replace('{ "drop": null }', '{ "accept": null }')
            mock_nft.set_return_value("list", "table", (0, stdout))
            with self.assertRaises(FirewallStateError) as context:
                firewall.check()
            self.assertIn("['The drop action is missing']", str(context.exception), "Expected an error message indicating the drop action is missing")


if __name__ == '__main__':
    unittest.main()


# ---- tests/ga/test_guestagent.py ----

import contextlib
import json
import os
import tempfile

from azurelinuxagent.common import conf
from azurelinuxagent.common.exception import UpdateError
from azurelinuxagent.ga.guestagent import GuestAgent, AGENT_MANIFEST_FILE, AGENT_ERROR_FILE, GuestAgentError, \
    MAX_FAILURE, GuestAgentUpdateAttempt
from azurelinuxagent.common.version import AGENT_NAME
from tests.ga.test_update import UpdateTestCase, EMPTY_MANIFEST, WITH_ERROR, NO_ERROR


class TestGuestAgent(UpdateTestCase):
    def setUp(self):
        UpdateTestCase.setUp(self)
        self.copy_agents(self._get_agent_file_path())
        self.agent_path = os.path.join(self.tmp_dir, self._get_agent_name())

    def test_creation(self):
        with self.assertRaises(UpdateError):
            GuestAgent.from_installed_agent("A very bad file name")

        with self.assertRaises(UpdateError):
            GuestAgent.from_installed_agent("{0}-a.bad.version".format(AGENT_NAME))

        self.expand_agents()

        agent = GuestAgent.from_installed_agent(self.agent_path)
        self.assertNotEqual(None, agent)
        self.assertEqual(self._get_agent_name(), agent.name)
        self.assertEqual(self._get_agent_version(), agent.version)
        self.assertEqual(self.agent_path, agent.get_agent_dir())

        path = os.path.join(self.agent_path, AGENT_MANIFEST_FILE)
        self.assertEqual(path, agent.get_agent_manifest_path())

        self.assertEqual(
            os.path.join(self.agent_path, AGENT_ERROR_FILE),
            agent.get_agent_error_file())

        path = ".".join((os.path.join(conf.get_lib_dir(), self._get_agent_name()), "zip"))
        self.assertEqual(path, agent.get_agent_pkg_path())

        self.assertTrue(agent.is_downloaded)
        self.assertFalse(agent.is_blacklisted)
        self.assertTrue(agent.is_available)

    def test_clear_error(self):
        self.expand_agents()
        agent =
GuestAgent.from_installed_agent(self.agent_path) agent.mark_failure(is_fatal=True) self.assertTrue(agent.error.last_failure > 0.0) self.assertEqual(1, agent.error.failure_count) self.assertTrue(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) agent.clear_error() self.assertEqual(0.0, agent.error.last_failure) self.assertEqual(0, agent.error.failure_count) self.assertFalse(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) def test_is_available(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertTrue(agent.is_available) agent.mark_failure(is_fatal=True) self.assertFalse(agent.is_available) def test_is_blacklisted(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertFalse(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) agent.mark_failure(is_fatal=True) self.assertTrue(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) def test_is_downloaded(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertTrue(agent.is_downloaded) def test_mark_failure(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.mark_failure() self.assertEqual(1, agent.error.failure_count) agent.mark_failure(is_fatal=True) self.assertEqual(2, agent.error.failure_count) self.assertTrue(agent.is_blacklisted) def test_inc_update_attempt_count(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.inc_update_attempt_count() self.assertEqual(1, agent.update_attempt_data.count) agent.inc_update_attempt_count() self.assertEqual(2, agent.update_attempt_data.count) def test_get_update_count(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.inc_update_attempt_count() self.assertEqual(1, agent.get_update_attempt_count()) agent.inc_update_attempt_count() self.assertEqual(2, 
                         agent.get_update_attempt_count())

    def test_load_manifest(self):
        self.expand_agents()
        agent = GuestAgent.from_installed_agent(self.agent_path)
        agent._load_manifest()
        self.assertEqual(agent.manifest.get_enable_command(), agent.get_agent_cmd())

    def test_load_manifest_missing(self):
        self.expand_agents()
        agent = GuestAgent.from_installed_agent(self.agent_path)
        os.remove(agent.get_agent_manifest_path())
        self.assertRaises(UpdateError, agent._load_manifest)

    def test_load_manifest_is_empty(self):
        self.expand_agents()
        agent = GuestAgent.from_installed_agent(self.agent_path)
        self.assertTrue(os.path.isfile(agent.get_agent_manifest_path()))

        with open(agent.get_agent_manifest_path(), "w") as file:  # pylint: disable=redefined-builtin
            json.dump(EMPTY_MANIFEST, file)
        self.assertRaises(UpdateError, agent._load_manifest)

    def test_load_manifest_is_malformed(self):
        self.expand_agents()
        agent = GuestAgent.from_installed_agent(self.agent_path)
        self.assertTrue(os.path.isfile(agent.get_agent_manifest_path()))

        with open(agent.get_agent_manifest_path(), "w") as file:  # pylint: disable=redefined-builtin
            file.write("This is not JSON data")
        self.assertRaises(UpdateError, agent._load_manifest)

    def test_load_error(self):
        agent = GuestAgent.from_installed_agent(self.agent_path)
        agent.error = None

        agent._load_error()
        self.assertTrue(agent.error is not None)


class TestGuestAgentError(UpdateTestCase):
    def test_creation(self):
        self.assertRaises(TypeError, GuestAgentError)
        self.assertRaises(UpdateError, GuestAgentError, None)

        with self.get_error_file(error_data=WITH_ERROR) as path:
            err = GuestAgentError(path.name)
            err.load()
            self.assertEqual(path.name, err.path)
            self.assertNotEqual(None, err)

            self.assertEqual(WITH_ERROR["last_failure"], err.last_failure)
            self.assertEqual(WITH_ERROR["failure_count"], err.failure_count)
            self.assertEqual(WITH_ERROR["was_fatal"], err.was_fatal)
        return

    def test_clear(self):
        with self.get_error_file(error_data=WITH_ERROR) as path:
            err = GuestAgentError(path.name)
            err.load()
            self.assertEqual(path.name, err.path)
            self.assertNotEqual(None, err)

            err.clear()
            self.assertEqual(NO_ERROR["last_failure"], err.last_failure)
            self.assertEqual(NO_ERROR["failure_count"], err.failure_count)
            self.assertEqual(NO_ERROR["was_fatal"], err.was_fatal)
        return

    def test_save(self):
        err1 = self.create_error()
        err1.mark_failure()
        err1.mark_failure(is_fatal=True)

        err2 = self.create_error(err1.to_json())
        self.assertEqual(err1.last_failure, err2.last_failure)
        self.assertEqual(err1.failure_count, err2.failure_count)
        self.assertEqual(err1.was_fatal, err2.was_fatal)

    def test_mark_failure(self):
        err = self.create_error()
        self.assertFalse(err.is_blacklisted)

        for i in range(0, MAX_FAILURE):  # pylint: disable=unused-variable
            err.mark_failure()

        # Agent failed >= MAX_FAILURE, it should be blacklisted
        self.assertTrue(err.is_blacklisted)
        self.assertEqual(MAX_FAILURE, err.failure_count)
        return

    def test_mark_failure_permanent(self):
        err = self.create_error()
        self.assertFalse(err.is_blacklisted)

        # Fatal errors immediately blacklist
        err.mark_failure(is_fatal=True)
        self.assertTrue(err.is_blacklisted)
        self.assertTrue(err.failure_count < MAX_FAILURE)
        return

    def test_str(self):
        err = self.create_error(error_data=NO_ERROR)
        s = "Last Failure: {0}, Total Failures: {1}, Fatal: {2}, Reason: {3}".format(
            NO_ERROR["last_failure"],
            NO_ERROR["failure_count"],
            NO_ERROR["was_fatal"],
            NO_ERROR["reason"])
        self.assertEqual(s, str(err))

        err = self.create_error(error_data=WITH_ERROR)
        s = "Last Failure: {0}, Total Failures: {1}, Fatal: {2}, Reason: {3}".format(
            WITH_ERROR["last_failure"],
            WITH_ERROR["failure_count"],
            WITH_ERROR["was_fatal"],
            WITH_ERROR["reason"])
        self.assertEqual(s, str(err))
        return


UPDATE_ATTEMPT = {
    "count": 2
}

NO_ATTEMPT = {
    "count": 0
}


class TestGuestAgentUpdateAttempt(UpdateTestCase):
    @contextlib.contextmanager
    def get_attempt_count_file(self, attempt_count=None):
        if attempt_count is None:
            attempt_count = NO_ATTEMPT

        with tempfile.NamedTemporaryFile(mode="w") as fp:
            json.dump(attempt_count, fp)
            fp.seek(0)
            yield fp

    def test_creation(self):
        self.assertRaises(TypeError, GuestAgentUpdateAttempt)
        self.assertRaises(UpdateError, GuestAgentUpdateAttempt, None)

        with self.get_attempt_count_file(UPDATE_ATTEMPT) as path:
            update_data = GuestAgentUpdateAttempt(path.name)
            update_data.load()
            self.assertEqual(path.name, update_data.path)
            self.assertNotEqual(None, update_data)

            self.assertEqual(UPDATE_ATTEMPT["count"], update_data.count)

    def test_clear(self):
        with self.get_attempt_count_file(UPDATE_ATTEMPT) as path:
            update_data = GuestAgentUpdateAttempt(path.name)
            update_data.load()
            self.assertEqual(path.name, update_data.path)
            self.assertNotEqual(None, update_data)

            update_data.clear()
            self.assertEqual(NO_ATTEMPT["count"], update_data.count)

    def test_save(self):
        with self.get_attempt_count_file(UPDATE_ATTEMPT) as path:
            update_data = GuestAgentUpdateAttempt(path.name)
            update_data.load()
            update_data.inc_count()
            update_data.save()

        with self.get_attempt_count_file(update_data.to_json()) as path:
            new_data = GuestAgentUpdateAttempt(path.name)
            new_data.load()
            self.assertEqual(update_data.count, new_data.count)

    def test_inc_count(self):
        with self.get_attempt_count_file() as path:
            update_data = GuestAgentUpdateAttempt(path.name)
            update_data.load()

            self.assertEqual(0, update_data.count)
            update_data.inc_count()
            self.assertEqual(1, update_data.count)
            update_data.inc_count()
            self.assertEqual(2, update_data.count)

Azure-WALinuxAgent-a976115/tests/ga/test_logcollector.py

# Microsoft Azure Linux Agent
#
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import shutil
import tempfile
import zipfile

from azurelinuxagent.ga.logcollector import LogCollector
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.common.utils.fileutil import rm_dirs, mkdir, rm_files
from tests.lib.tools import AgentTestCase, is_python_version_26, patch, skip_if_predicate_true, data_dir

SMALL_FILE_SIZE = 1 * 1024 * 1024  # 1 MB
LARGE_FILE_SIZE = 5 * 1024 * 1024  # 5 MB


@skip_if_predicate_true(is_python_version_26, "Disabled on Python 2.6")
class TestLogCollector(AgentTestCase):

    @classmethod
    def setUpClass(cls):
        AgentTestCase.setUpClass()
        prefix = "{0}_".format(cls.__class__.__name__)

        cls.tmp_dir = tempfile.mkdtemp(prefix=prefix)
        cls.root_collect_dir = os.path.join(cls.tmp_dir, "files_to_collect")
        mkdir(cls.root_collect_dir)

        cls._mock_constants()
        cls._mock_cgroup()

    @classmethod
    def _mock_constants(cls):
        cls.mock_manifest = patch("azurelinuxagent.ga.logcollector.MANIFEST_NORMAL", cls._build_manifest())
        cls.mock_manifest.start()

        cls.log_collector_dir = os.path.join(cls.tmp_dir, "logcollector")
        cls.mock_log_collector_dir = patch("azurelinuxagent.ga.logcollector._LOG_COLLECTOR_DIR", cls.log_collector_dir)
        cls.mock_log_collector_dir.start()

        cls.truncated_files_dir = os.path.join(cls.tmp_dir, "truncated")
        cls.mock_truncated_files_dir = patch("azurelinuxagent.ga.logcollector._TRUNCATED_FILES_DIR", cls.truncated_files_dir)
        cls.mock_truncated_files_dir.start()

        cls.output_results_file_path = os.path.join(cls.log_collector_dir, "results.txt")
        cls.mock_output_results_file_path = \
patch("azurelinuxagent.ga.logcollector.OUTPUT_RESULTS_FILE_PATH", cls.output_results_file_path) cls.mock_output_results_file_path.start() cls.compressed_archive_path = os.path.join(cls.log_collector_dir, "logs.zip") cls.mock_compressed_archive_path = patch("azurelinuxagent.ga.logcollector.COMPRESSED_ARCHIVE_PATH", cls.compressed_archive_path) cls.mock_compressed_archive_path.start() @classmethod def _mock_cgroup(cls): # CPU Cgroups compute usage based on /proc/stat and /sys/fs/cgroup/.../cpuacct.stat; use mock data for those # files original_read_file = fileutil.read_file def mock_read_file(filepath, **args): if filepath == "/proc/stat": filepath = os.path.join(data_dir, "cgroups", "v1", "proc_stat_t0") elif filepath.endswith("/cpuacct.stat"): filepath = os.path.join(data_dir, "cgroups", "v1", "cpuacct.stat_t0") return original_read_file(filepath, **args) cls._mock_read_cpu_cgroup_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file) cls._mock_read_cpu_cgroup_file.start() @classmethod def tearDownClass(cls): cls.mock_manifest.stop() cls.mock_log_collector_dir.stop() cls.mock_truncated_files_dir.stop() cls.mock_output_results_file_path.stop() cls.mock_compressed_archive_path.stop() cls._mock_read_cpu_cgroup_file.stop() shutil.rmtree(cls.tmp_dir) AgentTestCase.tearDownClass() def setUp(self): AgentTestCase.setUp(self) self._build_test_data() def tearDown(self): rm_dirs(self.root_collect_dir) rm_files(self.compressed_archive_path) AgentTestCase.tearDown(self) @classmethod def _build_test_data(cls): """ Build a dummy file structure which will be used as a foundation for the log collector tests """ cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log"), SMALL_FILE_SIZE) # small text file cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log.1"), LARGE_FILE_SIZE) # large text file cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log.2.gz"), 
SMALL_FILE_SIZE, binary=True) # small binary file cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log.3.gz"), LARGE_FILE_SIZE, binary=True) # large binary file mkdir(os.path.join(cls.root_collect_dir, "another_dir")) cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "less_important_file"), SMALL_FILE_SIZE) cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "another_dir", "least_important_file"), SMALL_FILE_SIZE) @classmethod def _build_manifest(cls): """ Files listed in the manifest will be collected, others will be ignored """ files = [ os.path.join(cls.root_collect_dir, "waagent*"), os.path.join(cls.root_collect_dir, "less_important_file*"), os.path.join(cls.root_collect_dir, "another_dir", "least_important_file"), os.path.join(cls.root_collect_dir, "non_existing_file"), ] manifest = "" for file_entry in files: manifest += "copy,{0}\n".format(file_entry) return manifest @staticmethod def _create_file_of_specific_size(file_path, file_size, binary=False): binary_descriptor = "b" if binary else "" data = b'0' if binary else '0' with open(file_path, "w{0}".format(binary_descriptor)) as fh: # pylint: disable=bad-open-mode fh.seek(file_size - 1) fh.write(data) @staticmethod def _truncated_path(normal_path): return "truncated_" + normal_path.replace(os.path.sep, "_") def _assert_files_are_in_archive(self, expected_files): with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: archive_files = archive.namelist() for file in expected_files: # pylint: disable=redefined-builtin if file.lstrip(os.path.sep) not in archive_files: self.fail("File {0} was supposed to be collected, but is not present in the archive!".format(file)) # Assert that results file is always present if "results.txt" not in archive_files: self.fail("File results.txt was supposed to be collected, but is not present in the archive!") self.assertTrue(True) # pylint: disable=redundant-unittest-assert def 
_assert_files_are_not_in_archive(self, unexpected_files): with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: archive_files = archive.namelist() for file in unexpected_files: # pylint: disable=redefined-builtin if file.lstrip(os.path.sep) in archive_files: self.fail("File {0} wasn't supposed to be collected, but is present in the archive!".format(file)) self.assertTrue(True) # pylint: disable=redundant-unittest-assert def _assert_archive_created(self, archive): with open(self.output_results_file_path, "r") as out: error_message = out.readlines()[-1] self.assertTrue(archive, "Failed to collect logs, error message: {0}".format(error_message)) def _get_uncompressed_file_size(self, file): # pylint: disable=redefined-builtin with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: return archive.getinfo(file.lstrip(os.path.sep)).file_size def _get_number_of_files_in_archive(self): with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: # Exclude results file return len(archive.namelist())-1 def test_log_collector_parses_commands_in_manifest(self): # Ensure familiar commands are parsed and unknowns are ignored (like diskinfo and malformed entries) file_to_collect = os.path.join(self.root_collect_dir, "waagent.log") folder_to_list = self.root_collect_dir manifest = """ echo,### Test header ### unknown command ll,{0} copy,{1} diskinfo,""".format(folder_to_list, file_to_collect) with patch("azurelinuxagent.ga.logcollector.MANIFEST_NORMAL", manifest): log_collector = LogCollector() archive, uncompressed_file_size = log_collector.collect_logs_and_get_archive() with open(self.output_results_file_path, "r") as fh: results = fh.readlines() # Assert echo was parsed self.assertTrue(any(line.endswith("### Test header ###\n") for line in results)) # Assert unknown command was reported self.assertTrue(any(line.endswith("ERROR Couldn\'t parse \"unknown command\"\n") for line in results)) # Assert ll was parsed self.assertTrue(any("ls -alF 
{0}".format(folder_to_list) in line for line in results)) # Assert copy was parsed self._assert_archive_created(archive) self._assert_files_are_in_archive(expected_files=[file_to_collect]) self.assertEqual(uncompressed_file_size, os.path.getsize(file_to_collect)) no_files = self._get_number_of_files_in_archive() self.assertEqual(1, no_files, "Expected 1 file in archive, found {0}!".format(no_files)) def test_log_collector_uses_full_manifest_when_full_mode_enabled(self): file_to_collect = os.path.join(self.root_collect_dir, "less_important_file") manifest = """ echo,### Test header ### copy,{0} """.format(file_to_collect) with patch("azurelinuxagent.ga.logcollector.MANIFEST_FULL", manifest): log_collector = LogCollector(is_full_mode=True) archive, uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) self._assert_files_are_in_archive(expected_files=[file_to_collect]) self.assertEqual(uncompressed_file_size, os.path.getsize(file_to_collect)) no_files = self._get_number_of_files_in_archive() self.assertEqual(1, no_files, "Expected 1 file in archive, found {0}!".format(no_files)) def test_log_collector_should_collect_all_files(self): # All files in the manifest should be collected, since none of them are over the individual file size limit, # and combined they do not cross the archive size threshold. 
log_collector = LogCollector() archive, uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] self._assert_files_are_in_archive(expected_files) expected_total_uncompressed_size = 0 for file in expected_files: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(uncompressed_file_size, expected_total_uncompressed_size) no_files = self._get_number_of_files_in_archive() self.assertEqual(6, no_files, "Expected 6 files in archive, found {0}!".format(no_files)) def test_log_collector_should_truncate_large_text_files_and_ignore_large_binary_files(self): # Set the size limit so that some files are too large to collect in full. 
with patch("azurelinuxagent.ga.logcollector._FILE_SIZE_LIMIT", SMALL_FILE_SIZE): log_collector = LogCollector() archive, uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), self._truncated_path(os.path.join(self.root_collect_dir, "waagent.log.1")), # this file should be truncated os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.3.gz") # binary files cannot be truncated, ignore it ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) total_uncompressed_file_size = 0 for file in expected_files: if file.startswith("truncated_"): total_uncompressed_file_size += SMALL_FILE_SIZE else: total_uncompressed_file_size += os.path.getsize(file) self.assertEqual(total_uncompressed_file_size, uncompressed_file_size) no_files = self._get_number_of_files_in_archive() self.assertEqual(5, no_files, "Expected 5 files in archive, found {0}!".format(no_files)) def test_log_collector_should_prioritize_important_files_if_archive_too_big(self): # Set the archive size limit so that not all files can be collected. In that case, files will be added to the # archive according to their priority. # Specify files that have priority. The list is ordered, where the first entry has the highest priority. 
must_collect_files = [ os.path.join(self.root_collect_dir, "waagent*"), os.path.join(self.root_collect_dir, "less_important_file*") ] with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 10 * 1024 * 1024): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): log_collector = LogCollector() archive, uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) expected_total_uncompressed_size = 0 for file in expected_files: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(uncompressed_file_size, expected_total_uncompressed_size) no_files = self._get_number_of_files_in_archive() self.assertEqual(3, no_files, "Expected 3 files in archive, found {0}!".format(no_files)) # Second collection, if a file got deleted, delete it from the archive and add next file on the priority list # if there is enough space. 
rm_files(os.path.join(self.root_collect_dir, "waagent.log.3.gz")) with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 10 * 1024 * 1024): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): second_archive, second_uncompressed_file_size = log_collector.collect_logs_and_get_archive() expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.3.gz") ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) expected_total_uncompressed_size = 0 for file in expected_files: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(second_uncompressed_file_size, expected_total_uncompressed_size) self._assert_archive_created(second_archive) no_files = self._get_number_of_files_in_archive() self.assertEqual(5, no_files, "Expected 5 files in archive, found {0}!".format(no_files)) def test_log_collector_should_update_archive_when_files_are_new_or_modified_or_deleted(self): # Ensure the archive reflects the state of files on the disk at collection time. If a file was updated, it # needs to be updated in the archive, deleted if removed from disk, and added if not previously seen. 
log_collector = LogCollector() first_archive, first_uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(first_archive) # Everything should be in the archive expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] self._assert_files_are_in_archive(expected_files) expected_total_uncompressed_size = 0 for file in expected_files: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(first_uncompressed_file_size, expected_total_uncompressed_size) no_files = self._get_number_of_files_in_archive() self.assertEqual(6, no_files, "Expected 6 files in archive, found {0}!".format(no_files)) # Update a file and its last modified time to ensure the last modified time and last collection time are not # the same in this test file_to_update = os.path.join(self.root_collect_dir, "waagent.log") self._create_file_of_specific_size(file_to_update, LARGE_FILE_SIZE) # update existing file new_time = os.path.getmtime(file_to_update) + 5 os.utime(file_to_update, (new_time, new_time)) # Create a new file (that is covered by the manifest and will be collected) and delete a file self._create_file_of_specific_size(os.path.join(self.root_collect_dir, "less_important_file.1"), LARGE_FILE_SIZE) rm_files(os.path.join(self.root_collect_dir, "waagent.log.1")) second_archive, second_uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(second_archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, 
"less_important_file"), os.path.join(self.root_collect_dir, "less_important_file.1"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.1") ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) expected_total_uncompressed_size = 0 for file in expected_files: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(second_uncompressed_file_size, expected_total_uncompressed_size) file = os.path.join(self.root_collect_dir, "waagent.log") # pylint: disable=redefined-builtin new_file_size = self._get_uncompressed_file_size(file) self.assertEqual(LARGE_FILE_SIZE, new_file_size, "File {0} hasn't been updated! Size in archive is {1}, but " "should be {2}.".format(file, new_file_size, LARGE_FILE_SIZE)) no_files = self._get_number_of_files_in_archive() self.assertEqual(6, no_files, "Expected 6 files in archive, found {0}!".format(no_files)) def test_log_collector_should_clean_up_uncollected_truncated_files(self): # Make sure that truncated files that are no longer needed are cleaned up. If an existing truncated file # from a previous run is not collected in the current run, it should be deleted to free up space. # Specify files that have priority. The list is ordered, where the first entry has the highest priority. must_collect_files = [ os.path.join(self.root_collect_dir, "waagent*") ] # Set the archive size limit so that not all files can be collected. In that case, files will be added to the # archive according to their priority. # Set the size limit so that only two files can be collected, of which one needs to be truncated. 
with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 2 * SMALL_FILE_SIZE): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): with patch("azurelinuxagent.ga.logcollector._FILE_SIZE_LIMIT", SMALL_FILE_SIZE): log_collector = LogCollector() archive, uncompressed_file_size = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), self._truncated_path(os.path.join(self.root_collect_dir, "waagent.log.1")), # this file should be truncated ] self._assert_files_are_in_archive(expected_files) expected_total_uncompressed_size = 0 for file in expected_files: if file.startswith("truncated_"): expected_total_uncompressed_size += SMALL_FILE_SIZE else: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(uncompressed_file_size, expected_total_uncompressed_size) no_files = self._get_number_of_files_in_archive() self.assertEqual(2, no_files, "Expected 2 files in archive, found {0}!".format(no_files)) # Remove the original file so it is not collected anymore. In the next collection, the truncated file should be # removed both from the archive and from the filesystem. 
rm_files(os.path.join(self.root_collect_dir, "waagent.log.1")) with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 2 * SMALL_FILE_SIZE): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): with patch("azurelinuxagent.ga.logcollector._FILE_SIZE_LIMIT", SMALL_FILE_SIZE): log_collector = LogCollector() second_archive, second_uncompressed_file_size = log_collector.collect_logs_and_get_archive() expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), ] unexpected_files = [ self._truncated_path(os.path.join(self.root_collect_dir, "waagent.log.1")) ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) expected_total_uncompressed_size = 0 for file in expected_files: if file.startswith("truncated_"): expected_total_uncompressed_size += SMALL_FILE_SIZE else: expected_total_uncompressed_size += os.path.getsize(file) self.assertEqual(second_uncompressed_file_size, expected_total_uncompressed_size) self._assert_archive_created(second_archive) no_files = self._get_number_of_files_in_archive() self.assertEqual(2, no_files, "Expected 2 files in archive, found {0}!".format(no_files)) truncated_files = os.listdir(self.truncated_files_dir) self.assertEqual(0, len(truncated_files), "Uncollected truncated file waagent.log.1 should have been deleted!") Azure-WALinuxAgent-a976115/tests/ga/test_memorycontroller.py000066400000000000000000000115731510742556200241570ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.4+ and Openssl 1.0+ # from __future__ import print_function import errno import os import shutil from azurelinuxagent.ga.cgroupcontroller import CounterNotFound from azurelinuxagent.ga.memorycontroller import MemoryControllerV1, MemoryControllerV2 from tests.lib.tools import AgentTestCase, data_dir class TestMemoryControllerV1(AgentTestCase): def test_get_metrics_v1(self): test_mem_controller = MemoryControllerV1("test_extension", os.path.join(data_dir, "cgroups", "v1")) rss_memory_usage, cache_memory_usage = test_mem_controller.get_memory_usage() self.assertEqual(100000, rss_memory_usage) self.assertEqual(50000, cache_memory_usage) max_memory_usage = test_mem_controller.get_max_memory_usage() self.assertEqual(1000000, max_memory_usage) swap_memory_usage = test_mem_controller.try_swap_memory_usage() self.assertEqual(20000, swap_memory_usage) def test_get_metrics_v1_when_files_not_present(self): test_mem_controller = MemoryControllerV1("test_extension", os.path.join(data_dir, "cgroups")) with self.assertRaises(IOError) as e: test_mem_controller.get_memory_usage() self.assertEqual(e.exception.errno, errno.ENOENT) with self.assertRaises(IOError) as e: test_mem_controller.get_max_memory_usage() self.assertEqual(e.exception.errno, errno.ENOENT) with self.assertRaises(IOError) as e: test_mem_controller.try_swap_memory_usage() self.assertEqual(e.exception.errno, errno.ENOENT) def test_get_memory_usage_v1_counters_not_found(self): test_file = os.path.join(self.tmp_dir, "memory.stat") shutil.copyfile(os.path.join(data_dir, "cgroups", "v1", 
"memory.stat_missing"), test_file) test_mem_controller = MemoryControllerV1("test_extension", self.tmp_dir) with self.assertRaises(CounterNotFound): test_mem_controller.get_memory_usage() swap_memory_usage = test_mem_controller.try_swap_memory_usage() self.assertEqual(0, swap_memory_usage) class TestMemoryControllerV2(AgentTestCase): def test_get_metrics_v2(self): test_mem_controller = MemoryControllerV2("test_extension", os.path.join(data_dir, "cgroups", "v2")) anon_memory_usage, cache_memory_usage = test_mem_controller.get_memory_usage() self.assertEqual(17589300, anon_memory_usage) self.assertEqual(134553600, cache_memory_usage) max_memory_usage = test_mem_controller.get_max_memory_usage() self.assertEqual(194494464, max_memory_usage) swap_memory_usage = test_mem_controller.try_swap_memory_usage() self.assertEqual(20000, swap_memory_usage) memory_throttled_events = test_mem_controller.get_memory_throttled_events() self.assertEqual(9, memory_throttled_events) def test_get_metrics_v2_when_files_not_present(self): test_mem_controller = MemoryControllerV2("test_extension", os.path.join(data_dir, "cgroups")) with self.assertRaises(IOError) as e: test_mem_controller.get_memory_usage() self.assertEqual(e.exception.errno, errno.ENOENT) with self.assertRaises(IOError) as e: test_mem_controller.get_max_memory_usage() self.assertEqual(e.exception.errno, errno.ENOENT) with self.assertRaises(IOError) as e: test_mem_controller.try_swap_memory_usage() self.assertEqual(e.exception.errno, errno.ENOENT) with self.assertRaises(IOError) as e: test_mem_controller.get_memory_throttled_events() self.assertEqual(e.exception.errno, errno.ENOENT) def test_get_memory_usage_v1_counters_not_found(self): test_stat_file = os.path.join(self.tmp_dir, "memory.stat") shutil.copyfile(os.path.join(data_dir, "cgroups", "v2", "memory.stat_missing"), test_stat_file) test_events_file = os.path.join(self.tmp_dir, "memory.events") shutil.copyfile(os.path.join(data_dir, "cgroups", "v2", 
"memory.stat_missing"), test_events_file) test_mem_controller = MemoryControllerV2("test_extension", self.tmp_dir) with self.assertRaises(CounterNotFound): test_mem_controller.get_memory_usage() with self.assertRaises(CounterNotFound): test_mem_controller.get_memory_throttled_events() Azure-WALinuxAgent-a976115/tests/ga/test_monitor.py000066400000000000000000000350621510742556200222310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import os import random import string from azurelinuxagent.common import event, logger from azurelinuxagent.ga.cgroupcontroller import MetricValue, _REPORT_EVERY_HOUR from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.event import EVENTS_DIRECTORY from azurelinuxagent.common.protocol.healthservice import HealthService from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.ga.cpucontroller import CpuControllerV1 from azurelinuxagent.ga.memorycontroller import MemoryControllerV1 from azurelinuxagent.ga.monitor import get_monitor_handler, PeriodicOperation, SendImdsHeartbeat, \ ResetPeriodicLogMessages, SendHostPluginHeartbeat, PollResourceUsage, \ ReportNetworkErrors, ReportNetworkConfigurationChanges, PollSystemWideResourceUsage from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE from tests.lib.tools import Mock, MagicMock, patch, AgentTestCase, clear_singleton_instances def random_generator(size=6, chars=string.ascii_uppercase + string.digits + string.ascii_lowercase): return ''.join(random.choice(chars) for x in range(size)) @contextlib.contextmanager def _mock_wire_protocol(): # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) with mock_wire_protocol(DATA_FILE) as protocol: protocol_util = MagicMock() protocol_util.get_protocol = Mock(return_value=protocol) with patch("azurelinuxagent.ga.monitor.get_protocol_util", return_value=protocol_util): yield protocol class MonitorHandlerTestCase(AgentTestCase): def test_it_should_invoke_all_periodic_operations(self): def periodic_operation_run(self): 
            invoked_operations.append(self.__class__.__name__)

        with _mock_wire_protocol():
            with patch("azurelinuxagent.ga.monitor.MonitorHandler.stopped", side_effect=[False, True, False, True]):
                with patch("time.sleep"):
                    with patch.object(PeriodicOperation, "run", side_effect=periodic_operation_run, autospec=True):
                        with patch("azurelinuxagent.common.conf.get_monitor_network_configuration_changes") as monitor_network_changes:
                            for network_changes in [True, False]:
                                monitor_network_changes.return_value = network_changes
                                invoked_operations = []

                                monitor_handler = get_monitor_handler()
                                monitor_handler.run()
                                monitor_handler.join()

                                expected_operations = [
                                    PollResourceUsage.__name__,
                                    PollSystemWideResourceUsage.__name__,
                                    ReportNetworkErrors.__name__,
                                    ResetPeriodicLogMessages.__name__,
                                    SendHostPluginHeartbeat.__name__,
                                    SendImdsHeartbeat.__name__,
                                ]
                                if network_changes:
                                    expected_operations.append(ReportNetworkConfigurationChanges.__name__)

                                invoked_operations.sort()
                                expected_operations.sort()

                                self.assertEqual(invoked_operations, expected_operations,
                                                 "The monitor thread did not invoke the expected operations")


class SendHostPluginHeartbeatOperationTestCase(AgentTestCase, HttpRequestPredicates):
    def test_it_should_report_host_ga_health(self):
        with _mock_wire_protocol() as protocol:
            def http_post_handler(url, _, **__):
                if self.is_health_service_request(url):
                    http_post_handler.health_service_posted = True
                    return MockHttpResponse(status=200)
                return None
            http_post_handler.health_service_posted = False
            protocol.set_http_handlers(http_post_handler=http_post_handler)

            health_service = HealthService(protocol.get_endpoint())
            SendHostPluginHeartbeat(protocol, health_service).run()

            self.assertTrue(http_post_handler.health_service_posted, "The monitor thread did not report host ga plugin health")

    def test_it_should_report_a_telemetry_event_when_host_plugin_is_not_healthy(self):
        with _mock_wire_protocol() as protocol:
            # the error triggers only after ERROR_STATE_DELTA_DEFAULT
            with patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered', return_value=True):
                with patch('azurelinuxagent.common.event.EventLogger.add_event') as add_event_patcher:
                    def http_get_handler(url, *_, **__):
                        if self.is_host_plugin_health_request(url):
                            return MockHttpResponse(status=503)
                        return None
                    protocol.set_http_handlers(http_get_handler=http_get_handler)

                    health_service = HealthService(protocol.get_endpoint())
                    SendHostPluginHeartbeat(protocol, health_service).run()

                    heartbeat_events = [kwargs for _, kwargs in add_event_patcher.call_args_list if kwargs['op'] == 'HostPluginHeartbeatExtended']
                    self.assertTrue(len(heartbeat_events) == 1, "The monitor thread should have reported exactly 1 telemetry event for an unhealthy host ga plugin")
                    self.assertFalse(heartbeat_events[0]['is_success'], 'The reported event should indicate failure')

    def test_it_should_not_send_a_health_signal_when_the_hearbeat_fails(self):
        with _mock_wire_protocol() as protocol:
            with patch('azurelinuxagent.common.event.EventLogger.add_event') as add_event_patcher:
                health_service_post_requests = []

                def http_get_handler(url, *_, **__):
                    if self.is_host_plugin_health_request(url):
                        del health_service_post_requests[:]  # clear the requests; after this error there should be no more POSTs
                        raise IOError('A CLIENT ERROR')
                    return None

                def http_post_handler(url, _, **__):
                    if self.is_health_service_request(url):
                        health_service_post_requests.append(url)
                        return MockHttpResponse(status=200)
                    return None

                protocol.set_http_handlers(http_get_handler=http_get_handler, http_post_handler=http_post_handler)

                health_service = HealthService(protocol.get_endpoint())
                SendHostPluginHeartbeat(protocol, health_service).run()

                self.assertEqual(0, len(health_service_post_requests), "No health signals should have been posted: {0}".format(health_service_post_requests))

                heartbeat_events = [kwargs for _, kwargs in add_event_patcher.call_args_list if kwargs['op'] == 'HostPluginHeartbeat']
                self.assertTrue(len(heartbeat_events) == 1, "The monitor thread should have reported exactly 1 telemetry event for an unhealthy host ga plugin")
                self.assertFalse(heartbeat_events[0]['is_success'], 'The reported event should indicate failure')
                self.assertIn('A CLIENT ERROR', heartbeat_events[0]['message'], 'The failure does not include the expected message')


class ResetPeriodicLogMessagesOperationTestCase(AgentTestCase, HttpRequestPredicates):
    def test_it_should_clear_periodic_log_messages(self):
        logger.reset_periodic()

        # Adding 100 different messages
        expected = 100
        for i in range(expected):
            logger.periodic_info(logger.EVERY_DAY, "Test {0}".format(i))

        actual = len(logger.DEFAULT_LOGGER.periodic_messages)
        if actual != expected:
            raise Exception('Test setup error: the periodic messages were not added. Got: {0} Expected: {1}'.format(actual, expected))

        ResetPeriodicLogMessages().run()

        self.assertEqual(0, len(logger.DEFAULT_LOGGER.periodic_messages), "The monitor thread did not reset the periodic log messages")


@patch('azurelinuxagent.common.osutil.get_osutil')
@patch('azurelinuxagent.common.protocol.util.get_protocol_util')
@patch("azurelinuxagent.common.protocol.healthservice.HealthService._report")
@patch("azurelinuxagent.common.utils.restutil.http_get")
class TestExtensionMetricsDataTelemetry(AgentTestCase):

    def setUp(self):
        AgentTestCase.setUp(self)
        event.init_event_logger(os.path.join(self.tmp_dir, EVENTS_DIRECTORY))
        CGroupsTelemetry.reset()
        clear_singleton_instances(ProtocolUtil)
        protocol = WireProtocol('endpoint')
        protocol.client.update_goal_state = MagicMock()
        self.get_protocol = patch('azurelinuxagent.common.protocol.util.ProtocolUtil.get_protocol', return_value=protocol)
        self.get_protocol.start()

    def tearDown(self):
        AgentTestCase.tearDown(self)
        CGroupsTelemetry.reset()
        self.get_protocol.stop()

    @patch('azurelinuxagent.common.event.EventLogger.add_metric')
    @patch("azurelinuxagent.ga.cgroupstelemetry.CGroupsTelemetry.poll_all_tracked")
    def test_send_extension_metrics_telemetry(self, patch_poll_all_tracked,  # pylint: disable=unused-argument
                                              patch_add_metric, *args):
        patch_poll_all_tracked.return_value = [MetricValue("Process", "% Processor Time", "service", 1),
                                               MetricValue("Memory", "Total Memory Usage", "service", 1),
                                               MetricValue("Memory", "Max Memory Usage", "service", 1, _REPORT_EVERY_HOUR),
                                               MetricValue("Memory", "Swap Memory Usage", "service", 1, _REPORT_EVERY_HOUR)
                                               ]

        PollResourceUsage().run()
        self.assertEqual(1, patch_poll_all_tracked.call_count)
        self.assertEqual(4, patch_add_metric.call_count)  # Four metrics being sent.

    @patch('azurelinuxagent.common.event.EventLogger.add_metric')
    @patch("azurelinuxagent.ga.cgroupstelemetry.CGroupsTelemetry.poll_all_tracked")
    def test_send_extension_metrics_telemetry_for_empty_cgroup(self, patch_poll_all_tracked,  # pylint: disable=unused-argument
                                                               patch_add_metric, *args):
        patch_poll_all_tracked.return_value = []

        PollResourceUsage().run()
        self.assertEqual(1, patch_poll_all_tracked.call_count)
        self.assertEqual(0, patch_add_metric.call_count)

    @patch('azurelinuxagent.common.event.EventLogger.add_metric')
    @patch("azurelinuxagent.ga.memorycontroller.MemoryControllerV1.get_memory_usage")
    @patch('azurelinuxagent.common.logger.Logger.periodic_warn')
    def test_send_extension_metrics_telemetry_handling_memory_cgroup_exceptions_errno2(self, patch_periodic_warn,  # pylint: disable=unused-argument
                                                                                       get_memory_usage, patch_add_metric, *args):
        ioerror = IOError()
        ioerror.errno = 2
        get_memory_usage.side_effect = ioerror

        CGroupsTelemetry._tracked["/test/path"] = MemoryControllerV1("_cgroup_name", "/test/path")

        PollResourceUsage().run()
        self.assertEqual(0, patch_periodic_warn.call_count)
        self.assertEqual(0, patch_add_metric.call_count)  # No metrics should be sent.
    @patch('azurelinuxagent.common.event.EventLogger.add_metric')
    @patch("azurelinuxagent.ga.cpucontroller.CpuControllerV1.get_cpu_usage")
    @patch('azurelinuxagent.common.logger.Logger.periodic_warn')
    def test_send_extension_metrics_telemetry_handling_cpu_cgroup_exceptions_errno2(self, patch_periodic_warn,  # pylint: disable=unused-argument
                                                                                    patch_cpu_usage, patch_add_metric, *args):
        ioerror = IOError()
        ioerror.errno = 2
        patch_cpu_usage.side_effect = ioerror

        CGroupsTelemetry._tracked["/test/path"] = CpuControllerV1("_cgroup_name", "/test/path")

        PollResourceUsage().run()
        self.assertEqual(0, patch_periodic_warn.call_count)
        self.assertEqual(0, patch_add_metric.call_count)  # No metrics should be sent.


class TestPollSystemWideResourceUsage(AgentTestCase):

    @patch('azurelinuxagent.common.event.EventLogger.add_metric')
    @patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.get_used_and_available_system_memory")
    def test_send_system_memory_metrics(self, path_get_system_memory, patch_add_metric, *args):  # pylint: disable=unused-argument
        path_get_system_memory.return_value = (234.45, 123.45)
        PollSystemWideResourceUsage().run()

        self.assertEqual(1, path_get_system_memory.call_count)
        self.assertEqual(2, patch_add_metric.call_count)  # 2 metrics being sent.
    @patch('azurelinuxagent.common.event.EventLogger.add_metric')
    @patch("azurelinuxagent.ga.monitor.PollSystemWideResourceUsage.poll_system_memory_metrics")
    def test_send_system_memory_metrics_empty(self, path_poll_system_memory_metrics, patch_add_metric,  # pylint: disable=unused-argument
                                              *args):
        path_poll_system_memory_metrics.return_value = []
        PollSystemWideResourceUsage().run()

        self.assertEqual(1, path_poll_system_memory_metrics.call_count)
        self.assertEqual(0, patch_add_metric.call_count)  # Zero metrics being sent.

Azure-WALinuxAgent-a976115/tests/ga/test_multi_config_extension.py

import contextlib
import json
import os.path
import re
import subprocess
import uuid

from azurelinuxagent.common import conf
from azurelinuxagent.common.event import WALAEventOperation
from azurelinuxagent.common.exception import GoalStateAggregateStatusCodes
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.protocol.restapi import ExtensionRequestedState, ExtensionState
from azurelinuxagent.common.utils import fileutil
from azurelinuxagent.ga.exthandlers import get_exthandlers_handler, ExtensionStatusValue, ExtCommandEnvVariable, \
    GoalStateStatus, ExtHandlerInstance
from tests.lib.extension_emulator import enable_invocations, extension_emulator, ExtensionCommandNames, Actions, \
    extract_extension_info_from_command
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib.http_request_predicates import HttpRequestPredicates
from tests.lib.wire_protocol_data import DATA_FILE, WireProtocolData
from tests.lib.tools import AgentTestCase, mock_sleep, patch


class TestMultiConfigExtensionsConfigParsing(AgentTestCase):

    _MULTI_CONFIG_TEST_DATA = os.path.join("wire", "multi-config")

    def setUp(self):
        AgentTestCase.setUp(self)
        self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.0001))
        self.mock_sleep.start()
        self.test_data = DATA_FILE.copy()

    def tearDown(self):
        self.mock_sleep.stop()
        AgentTestCase.tearDown(self)

    class _TestExtHandlerObject:
        def __init__(self, name, version, state="enabled"):
            self.name = name
            self.version = version
            self.state = state
            self.is_invalid_setting = False
            self.settings = {}

    class _TestExtensionObject:
        def __init__(self, name, seq_no, dependency_level="0", state="enabled"):
            self.name = name
            self.seq_no = seq_no
            self.dependency_level = int(dependency_level)
            self.state = state

    def _mock_and_assert_ext_handlers(self, expected_handlers):
        with mock_wire_protocol(self.test_data) as protocol:
            ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions
            for ext_handler in ext_handlers:
                if ext_handler.name not in expected_handlers:
                    continue
                expected_handler = expected_handlers.pop(ext_handler.name)
                self.assertEqual(expected_handler.state, ext_handler.state)
                self.assertEqual(expected_handler.version, ext_handler.version)
                self.assertEqual(expected_handler.is_invalid_setting, ext_handler.is_invalid_setting)
                self.assertEqual(len(expected_handler.settings), len(ext_handler.settings))

                for extension in ext_handler.settings:
                    self.assertIn(extension.name, expected_handler.settings)
                    expected_extension = expected_handler.settings.pop(extension.name)
                    self.assertEqual(expected_extension.seq_no, extension.sequenceNumber)
                    self.assertEqual(expected_extension.state, extension.state)
                    self.assertEqual(expected_extension.dependency_level, extension.dependencyLevel)

                self.assertEqual(0, len(expected_handler.settings), "All extensions not verified for handler")

            self.assertEqual(0, len(expected_handlers), "All handlers not verified")

    def _get_mock_expected_handler_data(self, rc_extensions, vmaccess_extensions, geneva_extensions):
        # Set expected handler data
        run_command_test_handler = self._TestExtHandlerObject("Microsoft.CPlat.Core.RunCommandHandlerWindows", "2.3.0")
        run_command_test_handler.settings.update(rc_extensions)

        vm_access_test_handler = 
self._TestExtHandlerObject("Microsoft.Compute.VMAccessAgent", "2.4.7") vm_access_test_handler.settings.update(vmaccess_extensions) geneva_test_handler = self._TestExtHandlerObject("Microsoft.Azure.Geneva.GenevaMonitoring", "2.20.0.1") geneva_test_handler.settings.update(geneva_extensions) expected_handlers = { run_command_test_handler.name: run_command_test_handler, vm_access_test_handler.name: vm_access_test_handler, geneva_test_handler.name: geneva_test_handler } return expected_handlers def test_it_should_parse_multi_config_settings_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_multi_config.xml") rc_extensions = { "firstRunCommand": self._TestExtensionObject(name="firstRunCommand", seq_no=2), "secondRunCommand": self._TestExtensionObject(name="secondRunCommand", seq_no=2, dependency_level="3"), "thirdRunCommand": self._TestExtensionObject(name="thirdRunCommand", seq_no=1, dependency_level="4") } vmaccess_extensions = { "Microsoft.Compute.VMAccessAgent": self._TestExtensionObject(name="Microsoft.Compute.VMAccessAgent", seq_no=1, dependency_level=2)} geneva_extensions = {"Microsoft.Azure.Geneva.GenevaMonitoring": self._TestExtensionObject( name="Microsoft.Azure.Geneva.GenevaMonitoring", seq_no=1)} expected_handlers = self._get_mock_expected_handler_data(rc_extensions, vmaccess_extensions, geneva_extensions) self._mock_and_assert_ext_handlers(expected_handlers) def test_it_should_parse_multi_config_with_disable_state_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_disabled_multi_config.xml") rc_extensions = { "firstRunCommand": self._TestExtensionObject(name="firstRunCommand", seq_no=3), "secondRunCommand": self._TestExtensionObject(name="secondRunCommand", seq_no=3, dependency_level="1"), "thirdRunCommand": self._TestExtensionObject(name="thirdRunCommand", seq_no=1, dependency_level="4", state="disabled") } vmaccess_extensions = { 
"Microsoft.Compute.VMAccessAgent": self._TestExtensionObject(name="Microsoft.Compute.VMAccessAgent", seq_no=2, dependency_level="2")} geneva_extensions = {"Microsoft.Azure.Geneva.GenevaMonitoring": self._TestExtensionObject( name="Microsoft.Azure.Geneva.GenevaMonitoring", seq_no=2)} expected_handlers = self._get_mock_expected_handler_data(rc_extensions, vmaccess_extensions, geneva_extensions) self._mock_and_assert_ext_handlers(expected_handlers) class _MultiConfigBaseTestClass(AgentTestCase): _MULTI_CONFIG_TEST_DATA = os.path.join("wire", "multi-config") def setUp(self): AgentTestCase.setUp(self) self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01)) self.mock_sleep.start() self.test_data = DATA_FILE.copy() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) @contextlib.contextmanager def _setup_test_env(self, mock_manifest=False): with mock_wire_protocol(self.test_data) as protocol: def mock_http_put(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded return MockHttpResponse(status=500) protocol.aggregate_status = json.loads(args[0]) return MockHttpResponse(status=201) with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", True): protocol.aggregate_status = None protocol.set_http_handlers(http_put_handler=mock_http_put) exthandlers_handler = get_exthandlers_handler(protocol) no_of_extensions = protocol.mock_wire_data.get_no_of_extensions_in_config() if mock_manifest: with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.supports_multiple_extensions', return_value=True): yield exthandlers_handler, protocol, no_of_extensions else: yield exthandlers_handler, protocol, no_of_extensions def _assert_and_get_handler_status(self, aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", handler_version="1.0.0", status="Ready", expected_count=1, message=None): 
self.assertIsNotNone(aggregate_status['aggregateStatus'], "No aggregate status found") handlers = [handler for handler in aggregate_status['aggregateStatus']['handlerAggregateStatus'] if handler_name == handler['handlerName'] and handler_version == handler['handlerVersion']] self.assertEqual(expected_count, len(handlers), "Unexpected extension count") self.assertTrue(all(handler['status'] == status for handler in handlers), "Unexpected Status reported for handler {0}".format(handler_name)) if message is not None: self.assertTrue(all(message in handler['formattedMessage']['message'] for handler in handlers), "Status Message mismatch") return handlers def _assert_extension_status(self, handler_statuses, expected_ext_status, multi_config=False): for ext_name, settings_status in expected_ext_status.items(): ext_status = next(handler for handler in handler_statuses if handler['runtimeSettingsStatus']['settingsStatus']['status']['name'] == ext_name) ext_runtime_status = ext_status['runtimeSettingsStatus'] self.assertIsNotNone(ext_runtime_status, "Extension not found") self.assertEqual(settings_status['seq_no'], ext_runtime_status['sequenceNumber'], "Sequence no mismatch") self.assertEqual(settings_status['status'], ext_runtime_status['settingsStatus']['status']['status'], "status mismatch") if 'message' in settings_status and settings_status['message'] is not None: self.assertIn(settings_status['message'], ext_runtime_status['settingsStatus']['status']['formattedMessage']['message'], "message mismatch") if multi_config: self.assertEqual(ext_name, ext_runtime_status['extensionName'], "ext name mismatch") else: self.assertNotIn('extensionName', ext_runtime_status, "Extension name should not be reported for SC") handler_statuses.remove(ext_status) self.assertEqual(0, len(handler_statuses), "Unexpected extensions left for handler") class TestMultiConfigExtensions(_MultiConfigBaseTestClass): def __assert_extension_not_present(self, handlers, extensions): for ext_name in 
extensions: self.assertFalse(all( 'runtimeSettingsStatus' in handler and 'extensionName' in handler['runtimeSettingsStatus'] and handler['runtimeSettingsStatus']['extensionName'] == ext_name for handler in handlers), "Extension status found") def __run_and_assert_generic_case(self, exthandlers_handler, protocol, no_of_extensions, with_message=True): def get_message(msg): return msg if with_message else None exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3) expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 1, "message": get_message("Enabling firstExtension")}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2, "message": get_message("Enabling secondExtension")}, "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 3, "message": get_message("Enabling thirdExtension")}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9, "message": get_message("Enabling SingleConfig extension")} } self._assert_extension_status(sc_handler[:], expected_extensions) return mc_handlers, sc_handler def __setup_and_assert_disable_scenario(self, exthandlers_handler, protocol): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_disabled_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) 
protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", status="Ready", expected_count=2) expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": None}, "fourthExtension": {"status": ExtensionStatusValue.success, "seq_no": 101, "message": None}, } self.__assert_extension_not_present(mc_handlers[:], ["firstExtension", "secondExtension"]) self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", status="Ready") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10, "message": None} } self._assert_extension_status(sc_handler[:], expected_extensions) return mc_handlers, sc_handler @contextlib.contextmanager def __setup_generic_test_env(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension") second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension") third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension") fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension") # In _setup_test_env() contextmanager, yield is used inside an if-else block and that's creating a false positive pylint warning with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): # pylint: disable=contextmanager-generator-missing-cleanup with enable_invocations(first_ext, second_ext, third_ext, fourth_ext) as invocation_record: 
exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), (first_ext, ExtensionCommandNames.ENABLE), (second_ext, ExtensionCommandNames.ENABLE), (third_ext, ExtensionCommandNames.ENABLE), (fourth_ext, ExtensionCommandNames.INSTALL), (fourth_ext, ExtensionCommandNames.ENABLE) ) self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions, with_message=False) yield exthandlers_handler, protocol, [first_ext, second_ext, third_ext, fourth_ext] def test_it_should_execute_and_report_multi_config_extensions_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): # Case 1: Install and enable Single and MultiConfig extensions self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions) # Case 2: Disable 2 multi-config extensions and add another for enable self.__setup_and_assert_disable_scenario(exthandlers_handler, protocol) # Case 3: Uninstall Multi-config handler (with enabled extensions) and single config extension protocol.mock_wire_data.set_incarnation(3) protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "No handler/extension status should be reported") def test_it_should_report_unregistered_version_error_per_extension(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env() as 
(exthandlers_handler, protocol, no_of_extensions): # Set a random failing extension failing_version = "19.12.1221" protocol.mock_wire_data.set_extensions_config_version(failing_version) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") error_msg_format = '[ExtensionError] Unable to find version {0} in manifest for extension {1}' mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", handler_version=failing_version, status="NotReady", expected_count=3, message=error_msg_format.format(failing_version, "OSTCExtensions.ExampleHandlerLinux")) self.assertTrue(all( handler['runtimeSettingsStatus']['settingsStatus']['status']['operation'] == WALAEventOperation.Download and handler['runtimeSettingsStatus']['settingsStatus']['status']['status'] == ExtensionStatusValue.error for handler in mc_handlers), "Incorrect data reported") sc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", handler_version=failing_version, status="NotReady", message=error_msg_format.format(failing_version, "Microsoft.Powershell.ExampleExtension")) self.assertFalse(all("runtimeSettingsStatus" in handler for handler in sc_handlers), "Incorrect status") def test_it_should_not_install_handler_again_if_installed(self): with self.__setup_generic_test_env() as (_, _, _): # Everything is already asserted in the context manager pass def test_it_should_retry_handler_installation_per_extension_if_failed(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env() as (exthandlers_handler, protocol, 
no_of_extensions): fail_code, fail_action = Actions.generate_unique_fail() first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", install_action=fail_action, supports_multiple_extensions=True) second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", supports_multiple_extensions=True) third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", supports_multiple_extensions=True) sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", install_action=fail_action) with enable_invocations(first_ext, second_ext, third_ext, sc_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), # Should try installation again if first time failed (second_ext, ExtensionCommandNames.INSTALL), (second_ext, ExtensionCommandNames.ENABLE), (third_ext, ExtensionCommandNames.ENABLE), (sc_ext, ExtensionCommandNames.INSTALL) ) mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3, status="Ready") expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": fail_code}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2, "message": None}, "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 3, "message": None}, } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) sc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", status="NotReady", message=fail_code) self.assertFalse(all("runtimeSettingsStatus" in handler for 
                            handler in sc_handlers), "Incorrect status")

    def test_it_should_only_disable_enabled_extensions_on_update(self):
        with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts):
            # Update extensions
            self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                      'ext_conf_mc_update_extensions.xml')
            protocol.mock_wire_data = WireProtocolData(self.test_data)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            new_version = "1.1.0"
            new_first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                               version=new_version, supports_multiple_extensions=True)
            new_second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                                version=new_version, supports_multiple_extensions=True)
            new_third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension",
                                               version=new_version, supports_multiple_extensions=True)
            new_sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", version=new_version)

            with enable_invocations(new_first_ext, new_second_ext, new_third_ext, new_sc_ext,
                                    *old_exts) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                old_first, old_second, old_third, old_fourth = old_exts
                invocation_record.compare(
                    # Disable all enabled commands for MC before updating the Handler
                    (old_first, ExtensionCommandNames.DISABLE),
                    (old_second, ExtensionCommandNames.DISABLE),
                    (old_third, ExtensionCommandNames.DISABLE),
                    (new_first_ext, ExtensionCommandNames.UPDATE),
                    (old_first, ExtensionCommandNames.UNINSTALL),
                    (new_first_ext, ExtensionCommandNames.INSTALL),
                    # No enable for First and Second extension as their state is Disabled in GoalState,
                    # only enabled the ThirdExtension
                    (new_third_ext, ExtensionCommandNames.ENABLE),
                    # Follow the normal update pattern for Single config handlers
                    (old_fourth, ExtensionCommandNames.DISABLE),
                    (new_sc_ext, ExtensionCommandNames.UPDATE),
                    (old_fourth, ExtensionCommandNames.UNINSTALL),
                    (new_sc_ext, ExtensionCommandNames.INSTALL),
                    (new_sc_ext, ExtensionCommandNames.ENABLE)
                )

            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=1, handler_version=new_version)
            expected_extensions = {
                "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": None}
            }
            self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True)

            self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                handler_version=new_version,
                                                handler_name="Microsoft.Powershell.ExampleExtension")

    def test_it_should_retry_update_sequence_per_extension_if_previous_failed(self):
        with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts):
            # Update extensions
            self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                      'ext_conf_mc_update_extensions.xml')
            protocol.mock_wire_data = WireProtocolData(self.test_data)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            new_version = "1.1.0"
            _, fail_action = Actions.generate_unique_fail()
            # Fail Uninstall of the secondExtension
            old_exts[1] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                             uninstall_action=fail_action, supports_multiple_extensions=True)
            # Fail update of the first extension
            new_first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                               version=new_version, update_action=fail_action,
                                               supports_multiple_extensions=True)
            new_second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                                version=new_version, supports_multiple_extensions=True)
            new_third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension",
                                               version=new_version, supports_multiple_extensions=True)
            new_sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", version=new_version)

            with enable_invocations(new_first_ext, new_second_ext, new_third_ext, new_sc_ext,
                                    *old_exts) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                old_first, old_second, old_third, old_fourth = old_exts
                invocation_record.compare(
                    # Disable all enabled commands for MC before updating the Handler
                    (old_first, ExtensionCommandNames.DISABLE),
                    (old_second, ExtensionCommandNames.DISABLE),
                    (old_third, ExtensionCommandNames.DISABLE),
                    (new_first_ext, ExtensionCommandNames.UPDATE),
                    # Since the extensions have been disabled before, we won't disable them again for Update scenario
                    (new_second_ext, ExtensionCommandNames.UPDATE),
                    # This will fail too as per the mock above
                    (old_second, ExtensionCommandNames.UNINSTALL),
                    (new_third_ext, ExtensionCommandNames.UPDATE),
                    (old_third, ExtensionCommandNames.UNINSTALL),
                    (new_third_ext, ExtensionCommandNames.INSTALL),
                    # No enable for First and Second extension as their state is Disabled in GoalState,
                    # only enabled the ThirdExtension
                    (new_third_ext, ExtensionCommandNames.ENABLE),
                    # Follow the normal update pattern for Single config handlers
                    (old_fourth, ExtensionCommandNames.DISABLE),
                    (new_sc_ext, ExtensionCommandNames.UPDATE),
                    (old_fourth, ExtensionCommandNames.UNINSTALL),
                    (new_sc_ext, ExtensionCommandNames.INSTALL),
                    (new_sc_ext, ExtensionCommandNames.ENABLE)
                )

            # Since firstExtension and secondExtension are Disabled, we won't report their status
            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=1, handler_version=new_version)
            expected_extensions = {
                "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": None}
            }
            self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True)

            sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                             handler_version=new_version,
                                                             handler_name="Microsoft.Powershell.ExampleExtension")
            expected_extensions = {
                "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10,
                                                          "message": None}
            }
            self._assert_extension_status(sc_handler, expected_extensions)

    def test_it_should_report_disabled_extension_errors_if_update_failed(self):
        with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts):
            # Update extensions
            self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                      'ext_conf_mc_update_extensions.xml')
            protocol.mock_wire_data = WireProtocolData(self.test_data)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            new_version = "1.1.0"
            fail_code, fail_action = Actions.generate_unique_fail()
            # Fail Disable of the firstExtension
            old_exts[0] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                             disable_action=fail_action, supports_multiple_extensions=True)
            new_first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                               version=new_version, supports_multiple_extensions=True)
            new_second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                                version=new_version, supports_multiple_extensions=True)
            new_third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension",
                                               version=new_version, supports_multiple_extensions=True)
            new_fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", version=new_version)

            with enable_invocations(new_first_ext, new_second_ext, new_third_ext, new_fourth_ext,
                                    *old_exts) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                old_first, _, _, old_fourth = old_exts
                invocation_record.compare(
                    # Disable for firstExtension should fail 3 times, i.e., once per extension which tries to
                    # update the Handler
                    (old_first, ExtensionCommandNames.DISABLE),
                    (old_first, ExtensionCommandNames.DISABLE),
                    (old_first, ExtensionCommandNames.DISABLE),
                    # Since Disable fails for the firstExtension and continueOnUpdate = False,
                    # Update should not go through
                    # Follow the normal update pattern for Single config handlers
                    (old_fourth, ExtensionCommandNames.DISABLE),
                    (new_fourth_ext, ExtensionCommandNames.UPDATE),
                    (old_fourth, ExtensionCommandNames.UNINSTALL),
                    (new_fourth_ext, ExtensionCommandNames.INSTALL),
                    (new_fourth_ext, ExtensionCommandNames.ENABLE)
                )

            # Since firstExtension and secondExtension are Disabled, we won't report their status
            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=1, handler_version=new_version,
                                                              status="NotReady", message=fail_code)
            expected_extensions = {
                "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 99, "message": fail_code}
            }
            self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True)

            sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                             handler_version=new_version,
                                                             handler_name="Microsoft.Powershell.ExampleExtension")
            expected_extensions = {
                "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10,
                                                          "message": None}
            }
            self._assert_extension_status(sc_handler, expected_extensions)

    def test_it_should_report_extension_status_properly(self):
        self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                  "ext_conf_multi_config_no_dependencies.xml")
        with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions):
            self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions)

    def test_it_should_handle_and_report_enable_errors_properly(self):
        self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                  "ext_conf_multi_config_no_dependencies.xml")
        with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions):
            fail_code, fail_action = Actions.generate_unique_fail()
            first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                           supports_multiple_extensions=True)
            second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                            supports_multiple_extensions=True)
            third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension",
                                           enable_action=fail_action, supports_multiple_extensions=True)
            fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", enable_action=fail_action)
            with enable_invocations(first_ext, second_ext, third_ext, fourth_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                self.assertEqual(no_of_extensions,
                                 len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                                 "incorrect extensions reported")
                invocation_record.compare(
                    (first_ext, ExtensionCommandNames.INSTALL),
                    (first_ext, ExtensionCommandNames.ENABLE),
                    (second_ext, ExtensionCommandNames.ENABLE),
                    (third_ext, ExtensionCommandNames.ENABLE),
                    (fourth_ext, ExtensionCommandNames.INSTALL),
                    (fourth_ext, ExtensionCommandNames.ENABLE)
                )

            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=3, status="Ready")
            expected_extensions = {
                "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 1, "message": None},
                "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2, "message": None},
                "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 3, "message": fail_code},
            }
            self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True)

            sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                             handler_name="Microsoft.Powershell.ExampleExtension",
                                                             status="NotReady", message=fail_code)
            expected_extensions = {
                "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.error, "seq_no": 9,
                                                          "message": fail_code}
            }
            self._assert_extension_status(sc_handler, expected_extensions)

    def test_it_should_report_failed_status_for_extensions_disallowed_by_policy(self):
        """If multiconfig extension is disallowed by policy, all instances should be blocked."""
        policy_path = os.path.join(self.tmp_dir, "waagent_policy.json")
        with patch('azurelinuxagent.common.conf.get_policy_file_path', return_value=str(policy_path)):
            with patch('azurelinuxagent.ga.policy.policy_engine.conf.get_extension_policy_enabled',
                       return_value=True):
                policy = \
                    {
                        "policyVersion": "0.0.1",
                        "extensionPolicies": {
                            "allowListedExtensionsOnly": True,
                            "signatureRequired": True,
                            "extensions": {
                                "Microsoft.Powershell.ExampleExtension": {}
                            }
                        }
                    }
                with open(policy_path, mode='w') as policy_file:
                    json.dump(policy, policy_file, indent=4)
                    policy_file.flush()

                self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                          "ext_conf_multi_config_no_dependencies.xml")
                with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions):
                    disallowed_mc_1 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                                         supports_multiple_extensions=True)
                    disallowed_mc_2 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                                         supports_multiple_extensions=True)
                    disallowed_mc_3 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension",
                                                         supports_multiple_extensions=True)
                    allowed_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension")
                    with enable_invocations(disallowed_mc_1, disallowed_mc_2, disallowed_mc_3,
                                            allowed_ext) as invocation_record:
                        exthandlers_handler.run()
                        exthandlers_handler.report_ext_handlers_status()
                        self.assertEqual(no_of_extensions,
                                         len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                                         "incorrect extensions reported")
                        # We should only enable the allowed extension, no instances of the multiconfig extension
                        # should be enabled
                        invocation_record.compare(
                            (allowed_ext, ExtensionCommandNames.INSTALL),
                            (allowed_ext, ExtensionCommandNames.ENABLE)
                        )

                    mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                      handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                                      expected_count=3, status="NotReady")
                    msg = "failed to run extension 'OSTCExtensions.ExampleHandlerLinux' because it is not specified as an allowed extension"
                    expected_extensions = {
                        "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": msg},
                        "secondExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": msg},
                        "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 3, "message": msg},
                    }
                    self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True)

                    sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                     handler_name="Microsoft.Powershell.ExampleExtension",
                                                                     status="Ready", message=None)
                    expected_extensions = {
                        "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9,
                                                                  "message": None}
                    }
                    self._assert_extension_status(sc_handler, expected_extensions)

    def test_it_should_report_successful_status_for_extensions_allowed_by_policy(self):
        """If multiconfig extension is allowed by policy, all instances should be allowed."""
        policy_path = os.path.join(self.tmp_dir, "waagent_policy.json")
        with patch('azurelinuxagent.common.conf.get_policy_file_path', return_value=str(policy_path)):
            with patch('azurelinuxagent.ga.policy.policy_engine.conf.get_extension_policy_enabled',
                       return_value=True):
                policy = \
                    {
                        "policyVersion": "0.0.1",
                        "extensionPolicies": {
                            "allowListedExtensionsOnly": True,
                            "signatureRequired": True,
                            "extensions": {
                                "OSTCExtensions.ExampleHandlerLinux": {},
                                "Microsoft.Powershell.ExampleExtension": {}
                            }
                        }
                    }
                with open(policy_path, mode='w') as policy_file:
                    json.dump(policy, policy_file, indent=4)
                    policy_file.flush()

                self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                          "ext_conf_multi_config_no_dependencies.xml")
                with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions):
                    self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions)

    def test_it_should_cleanup_extension_state_on_disable(self):

        def __assert_state_file(handler_name, handler_version, extensions, state, not_present=None):
            config_path = os.path.join(self.tmp_dir, "{0}-{1}".format(handler_name, handler_version), "config")
            config_files = os.listdir(config_path)

            for ext_name in extensions:
                self.assertIn("{0}.settings".format(ext_name), config_files, "settings not found")
                self.assertEqual(
                    fileutil.read_file(os.path.join(config_path, "{0}.HandlerState".format(ext_name.split(".")[0]))),
                    state, "Invalid state")

            if not_present is not None:
                for ext_name in not_present:
                    self.assertNotIn("{0}.HandlerState".format(ext_name), config_files, "Wrongful state found")

        with self.__setup_generic_test_env() as (ext_handler, protocol, _):
            __assert_state_file("OSTCExtensions.ExampleHandlerLinux", "1.0.0",
                                ["firstExtension.1", "secondExtension.2", "thirdExtension.3"], ExtensionState.Enabled)

            self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                      'ext_conf_mc_disabled_extensions.xml')
            protocol.mock_wire_data = WireProtocolData(self.test_data)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            ext_handler.run()
            ext_handler.report_ext_handlers_status()

            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=2, status="Ready")
            expected_extensions = {
                "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99,
                                   "message": "Enabling thirdExtension"},
                "fourthExtension": {"status": ExtensionStatusValue.success, "seq_no": 101,
                                    "message": "Enabling fourthExtension"},
            }
            self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True)

            __assert_state_file("OSTCExtensions.ExampleHandlerLinux", "1.0.0",
                                ["thirdExtension.99", "fourthExtension.101"], ExtensionState.Enabled,
                                not_present=["firstExtension", "secondExtension"])

            sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                             handler_name="Microsoft.Powershell.ExampleExtension")
            expected_extensions = {
                "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10,
                                                          "message": "Enabling SingleConfig Extension"}
            }
            self._assert_extension_status(sc_handler, expected_extensions)

    def test_it_should_create_command_execution_log_per_extension(self):
        with self.__setup_generic_test_env() as (_, _, _):
            sc_handler_path = os.path.join(conf.get_ext_log_dir(), "Microsoft.Powershell.ExampleExtension")
            mc_handler_path = os.path.join(conf.get_ext_log_dir(), "OSTCExtensions.ExampleHandlerLinux")
            self.assertIn("CommandExecution_firstExtension.log", os.listdir(mc_handler_path),
                          "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(mc_handler_path, "CommandExecution_firstExtension.log")),
                               0, "Log file not being used")
            self.assertIn("CommandExecution_secondExtension.log", os.listdir(mc_handler_path),
                          "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(mc_handler_path, "CommandExecution_secondExtension.log")),
                               0, "Log file not being used")
            self.assertIn("CommandExecution_thirdExtension.log", os.listdir(mc_handler_path),
                          "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(mc_handler_path, "CommandExecution_thirdExtension.log")),
                               0, "Log file not being used")
            self.assertIn("CommandExecution.log", os.listdir(sc_handler_path), "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(sc_handler_path, "CommandExecution.log")), 0,
                               "Log file not being used")

    def test_it_should_set_relevant_environment_variables_for_mc(self):
        original_popen = subprocess.Popen
        handler_envs = {}

        def __assert_env_variables(handler_name, handler_version="1.0.0", seq_no="1", ext_name=None,
                                   expected_vars=None, not_expected=None):
            original_env_vars = {
                ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir,
                                                                  "{0}-{1}".format(handler_name, handler_version)),
                ExtCommandEnvVariable.ExtensionVersion: handler_version,
                ExtCommandEnvVariable.ExtensionSeqNumber: ustr(seq_no),
                ExtCommandEnvVariable.WireProtocolAddress: '168.63.129.16',
                ExtCommandEnvVariable.ExtensionSupportedFeatures: json.dumps(
                    [{"Key": "ExtensionTelemetryPipeline", "Value": "1.0"}])
            }

            full_name = handler_name
            if ext_name is not None:
                original_env_vars[ExtCommandEnvVariable.ExtensionName] = ext_name
                full_name = "{0}.{1}".format(handler_name, ext_name)
            self.assertIn(full_name, handler_envs, "Handler/ext combo not called")
            for commands in handler_envs[full_name]:
                expected_environment_variables = original_env_vars.copy()
                if expected_vars is not None and commands['command'] in expected_vars:
                    for name, val in expected_vars[commands['command']].items():
                        expected_environment_variables[name] = val

                self.assertTrue(all(
                    env_var in commands['data'] and env_val == commands['data'][env_var]
                    for env_var, env_val in expected_environment_variables.items()),
                    "Incorrect data for environment variable for {0}-{1}, incorrect: {2}".format(
                        full_name, commands['command'],
                        [(env_var, env_val) for env_var, env_val in expected_environment_variables.items()
                         if env_var not in commands['data'] or env_val != commands['data'][env_var]]))

                if not_expected is not None and commands['command'] in not_expected:
                    self.assertFalse(any(env_var in commands['data'] for env_var in not_expected),
                                     "Unwanted env variable found")

        def mock_popen(cmd, *_, **kwargs):
            # The cgroupapi Popen mock intercepts all popen calls, which breaks the extension emulator logic.
            # The emulator should be used only on extension commands, not on other commands, even when the env
            # flag is set; the ExtensionVersion check below avoids using the emulator on non-extension operations.
            if 'env' in kwargs and ExtCommandEnvVariable.ExtensionVersion in kwargs['env']:
                handler_name, __, command = extract_extension_info_from_command(cmd)
                name = handler_name
                if ExtCommandEnvVariable.ExtensionName in kwargs['env']:
                    name = "{0}.{1}".format(handler_name, kwargs['env'][ExtCommandEnvVariable.ExtensionName])

                data = {
                    "command": command,
                    "data": kwargs['env']
                }
                if name in handler_envs:
                    handler_envs[name].append(data)
                else:
                    handler_envs[name] = [data]
            return original_popen(cmd, *_, **kwargs)

        self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                  "ext_conf_multi_config_no_dependencies.xml")
        with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions):
            with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
                # Case 1: Check normal scenario - Install/Enable
                mc_handlers, sc_handler = self.__run_and_assert_generic_case(exthandlers_handler, protocol,
                                                                             no_of_extensions)

                for handler in mc_handlers:
                    __assert_env_variables(handler['handlerName'],
                                           ext_name=handler['runtimeSettingsStatus']['extensionName'],
                                           seq_no=handler['runtimeSettingsStatus']['sequenceNumber'])

                for handler in sc_handler:
                    __assert_env_variables(handler['handlerName'],
                                           seq_no=handler['runtimeSettingsStatus']['sequenceNumber'])

                # Case 2: Check Update Scenario
                # Clear old test case state
                handler_envs = {}
                self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                          'ext_conf_mc_update_extensions.xml')
                protocol.mock_wire_data = WireProtocolData(self.test_data)
                protocol.mock_wire_data.set_incarnation(2)
                protocol.client.update_goal_state()
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                  handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                                  expected_count=1, handler_version="1.1.0")
                expected_extensions = {
                    "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99,
                                       "message": "Enabling thirdExtension"},
                }
                self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True)

                sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                 handler_name="Microsoft.Powershell.ExampleExtension",
                                                                 handler_version="1.1.0")
                expected_extensions = {
                    "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10,
                                                              "message": "Enabling SingleConfig extension"}
                }
                self._assert_extension_status(sc_handler[:], expected_extensions)

                for handler in mc_handlers:
                    __assert_env_variables(handler['handlerName'], handler_version="1.1.0",
                                           ext_name=handler['runtimeSettingsStatus']['extensionName'],
                                           seq_no=handler['runtimeSettingsStatus']['sequenceNumber'],
                                           expected_vars={
                                               "disable": {
                                                   ExtCommandEnvVariable.ExtensionPath: os.path.join(
                                                       self.tmp_dir,
                                                       "{0}-{1}".format(handler['handlerName'], "1.0.0")),
                                                   ExtCommandEnvVariable.ExtensionVersion: '1.0.0'
                                               }})

                    # Assert the environment variables were present even for disabled/uninstalled commands
                    first_ext_expected_vars = {
                        "disable": {
                            ExtCommandEnvVariable.ExtensionPath: os.path.join(
                                self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")),
                            ExtCommandEnvVariable.ExtensionVersion: '1.0.0'
                        },
                        "uninstall": {
                            ExtCommandEnvVariable.ExtensionPath: os.path.join(
                                self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")),
                            ExtCommandEnvVariable.ExtensionVersion: '1.0.0'
                        },
                        "update": {
                            ExtCommandEnvVariable.UpdatingFromVersion: "1.0.0",
                            ExtCommandEnvVariable.DisableReturnCodeMultipleExtensions: json.dumps([
                                {"extensionName": "firstExtension", "exitCode": "0"},
                                {"extensionName": "secondExtension", "exitCode": "0"},
                                {"extensionName": "thirdExtension", "exitCode": "0"}
                            ])
                        }
                    }
                    __assert_env_variables(handler['handlerName'], ext_name="firstExtension",
                                           expected_vars=first_ext_expected_vars, handler_version="1.1.0", seq_no="1",
                                           not_expected={
                                               "update": [ExtCommandEnvVariable.DisableReturnCode]
                                           })

                    __assert_env_variables(handler['handlerName'], ext_name="secondExtension", seq_no="2")

                for handler in sc_handler:
                    sc_expected_vars = {
                        "disable": {
                            ExtCommandEnvVariable.ExtensionPath: os.path.join(
                                self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")),
                            ExtCommandEnvVariable.ExtensionVersion: '1.0.0'
                        },
                        "uninstall": {
                            ExtCommandEnvVariable.ExtensionPath: os.path.join(
                                self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")),
                            ExtCommandEnvVariable.ExtensionVersion: '1.0.0'
                        },
                        "update": {
                            ExtCommandEnvVariable.UpdatingFromVersion: "1.0.0",
                            ExtCommandEnvVariable.DisableReturnCode: "0"
                        }
                    }
                    __assert_env_variables(handler['handlerName'], handler_version="1.1.0",
                                           seq_no=handler['runtimeSettingsStatus']['sequenceNumber'],
                                           expected_vars=sc_expected_vars,
                                           not_expected={
                                               "update": [ExtCommandEnvVariable.DisableReturnCodeMultipleExtensions]
                                           })

    def test_it_should_ignore_disable_errors_for_multi_config_extensions(self):
        fail_code, fail_action = Actions.generate_unique_fail()

        with self.__setup_generic_test_env() as (exthandlers_handler, protocol, exts):
            # Fail disable of 1st and 2nd extension
            exts[0] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                         disable_action=fail_action)
            exts[1] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                         disable_action=fail_action)
            fourth_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.fourthExtension")

            with patch.object(ExtHandlerInstance, "report_event", autospec=True) as patch_report_event:
                with enable_invocations(fourth_ext, *exts) as invocation_record:
                    # Assert even though 2 extensions are failing, we clean their state up properly and enable the
                    # remaining extensions
                    self.__setup_and_assert_disable_scenario(exthandlers_handler, protocol)
                    first_ext, second_ext, third_ext, sc_ext = exts
                    invocation_record.compare(
                        (first_ext, ExtensionCommandNames.DISABLE),
                        (second_ext, ExtensionCommandNames.DISABLE),
                        (third_ext, ExtensionCommandNames.ENABLE),
                        (fourth_ext, ExtensionCommandNames.ENABLE),
                        (sc_ext, ExtensionCommandNames.ENABLE)
                    )

                reported_events = [kwargs for _, kwargs in patch_report_event.call_args_list if
                                   re.search("Executing command: (.+) with environment variables: ",
                                             kwargs['message']) is None]
                self.assertTrue(all(
                    fail_code in kwargs['message'] for kwargs in reported_events if kwargs['name'] == first_ext.name),
                    "Error not reported")
                self.assertTrue(all(
                    fail_code in kwargs['message'] for kwargs in reported_events if kwargs['name'] == second_ext.name),
                    "Error not reported")
                # Make sure fail code is not reported for any other extension
                self.assertFalse(all(
                    fail_code in kwargs['message'] for kwargs in reported_events if kwargs['name'] == third_ext.name),
                    "Error not reported")

    def test_it_should_report_transitioning_if_status_file_not_found(self):
        original_popen = subprocess.Popen

        def mock_popen(cmd, *_, **kwargs):
            if 'env' in kwargs:
                handler_name, handler_version, __ = extract_extension_info_from_command(cmd)
                ext_name = None
                if ExtCommandEnvVariable.ExtensionName in kwargs['env']:
                    ext_name = kwargs['env'][ExtCommandEnvVariable.ExtensionName]

                seq_no = kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber]
                status_file_name = "{0}.status".format(seq_no)
                status_file_name = "{0}.{1}".format(ext_name, status_file_name) if ext_name is not None \
                    else status_file_name
                status_file = os.path.join(self.tmp_dir, "{0}-{1}".format(handler_name, handler_version), "status",
                                           status_file_name)
                if os.path.exists(status_file):
                    os.remove(status_file)
            return original_popen("echo " + cmd, *_, **kwargs)

        self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                  "ext_conf_multi_config_no_dependencies.xml")
        with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions):
            with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self.assertEqual(no_of_extensions,
                                 len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                                 "incorrect extensions reported")

                mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                  handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                                  expected_count=3)
                agent_status_message = "This status is being reported by the Guest Agent since no status file was " \
                                       "reported by extension {0}: " \
                                       "[ExtensionStatusError] Status file"
                expected_extensions = {
                    "firstExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 1,
                                       "message": agent_status_message.format(
                                           "OSTCExtensions.ExampleHandlerLinux.firstExtension")},
                    "secondExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 2,
                                        "message": agent_status_message.format(
                                            "OSTCExtensions.ExampleHandlerLinux.secondExtension")},
                    "thirdExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 3,
                                       "message": agent_status_message.format(
                                           "OSTCExtensions.ExampleHandlerLinux.thirdExtension")},
                }
                self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True)

                sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                 handler_name="Microsoft.Powershell.ExampleExtension")
                expected_extensions = {
                    "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.transitioning,
                                                              "seq_no": 9,
                                                              "message": agent_status_message.format(
                                                                  "Microsoft.Powershell.ExampleExtension")}
                }
                self._assert_extension_status(sc_handler[:], expected_extensions)

    def test_it_should_report_status_correctly_for_unsupported_goal_state(self):
        with self.__setup_generic_test_env() as (exthandlers_handler, protocol, _):
            # Update GS with an ExtensionConfig with 3 Required features to force GA to mark it as unsupported
            self.test_data['ext_conf'] = "wire/ext_conf_required_features.xml"
            protocol.mock_wire_data = WireProtocolData(self.test_data)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            # Assert the extension status is the same as we reported for Incarnation 1.
            self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions=4, with_message=False)

            # Assert the GS was reported as unsupported
            gs_aggregate_status = protocol.aggregate_status['aggregateStatus']['vmArtifactsAggregateStatus'][
                'goalStateAggregateStatus']
            self.assertEqual(gs_aggregate_status['status'], GoalStateStatus.Failed, "Incorrect status")
            self.assertEqual(gs_aggregate_status['code'],
                             GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures, "Incorrect code")
            self.assertEqual(gs_aggregate_status['inSvdSeqNo'], '2', "Incorrect incarnation reported")
            self.assertEqual(gs_aggregate_status['formattedMessage']['message'],
                             'Failing GS incarnation_2 as Unsupported features found: TestRequiredFeature1, '
                             'TestRequiredFeature2, TestRequiredFeature3', "Incorrect error message reported")

    def test_it_should_fail_handler_if_handler_does_not_support_mc(self):
        self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                  "ext_conf_multi_config_no_dependencies.xml")
        first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension")
        second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension")
        third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension")
        fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension")
        with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions):
            with enable_invocations(first_ext, second_ext, third_ext, fourth_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                self.assertEqual(no_of_extensions,
                                 len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                                 "incorrect extensions reported")
                invocation_record.compare(
                    # Since we raise a ConfigError, we shouldn't process any of the MC extensions at all
                    (fourth_ext, ExtensionCommandNames.INSTALL),
                    (fourth_ext, ExtensionCommandNames.ENABLE)
                )

            err_msg = 'Handler OSTCExtensions.ExampleHandlerLinux does not support MultiConfig but CRP expects it, ' \
                      'failing due to inconsistent data'
            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=3, status="NotReady", message=err_msg)
            expected_extensions = {
                "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": err_msg},
                "secondExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": err_msg},
                "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 3, "message": err_msg},
            }
            self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True)

            sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                             handler_name="Microsoft.Powershell.ExampleExtension")
            expected_extensions = {
                "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9}
            }
            self._assert_extension_status(sc_handler[:], expected_extensions)

    def test_it_should_check_every_time_if_handler_supports_mc(self):
        with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts):
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            # Mock manifest to not support multiple extensions
            with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.supports_multiple_extensions',
                       return_value=False):
                with enable_invocations(*old_exts) as invocation_record:
                    (_, _, _, fourth_ext) = old_exts
                    exthandlers_handler.run()
                    exthandlers_handler.report_ext_handlers_status()
                    self.assertEqual(4,
                                     len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                                     "incorrect extensions reported")
                    invocation_record.compare(
                        # Since we raise a ConfigError, we shouldn't process any of the MC extensions at all
                        (fourth_ext, ExtensionCommandNames.ENABLE)
                    )

                err_msg = 'Handler OSTCExtensions.ExampleHandlerLinux does not support MultiConfig but CRP expects ' \
                          'it, failing due to inconsistent data'
                mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                  handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                                  expected_count=3, status="NotReady",
                                                                  message=err_msg)
                # Since the extensions were not even executed, their status file should reflect the last status
                # (Handler status above should always report the error though)
                expected_extensions = {
                    "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 1},
                    "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2},
                    "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 3},
                }
                self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True)

                sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                 handler_name="Microsoft.Powershell.ExampleExtension")
                expected_extensions = {
                    "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9}
                }
                self._assert_extension_status(sc_handler[:], expected_extensions)


class TestMultiConfigExtensionSequencing(_MultiConfigBaseTestClass):

    @contextlib.contextmanager
    def __setup_test_and_get_exts(self):
        self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA,
                                                  "ext_conf_with_multi_config_dependencies.xml")

        first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension",
                                       supports_multiple_extensions=True)
        second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension",
                                        supports_multiple_extensions=True)
        third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension",
                                       supports_multiple_extensions=True)
        dependent_sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension")
        independent_sc_ext = extension_emulator(name="Microsoft.Azure.Geneva.GenevaMonitoring", version="1.1.0")

        # In _setup_test_env() contextmanager, yield is used inside an if-else block and that's creating a false
        # positive pylint warning
        with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions):  # pylint: disable=contextmanager-generator-missing-cleanup
            yield exthandlers_handler, protocol, no_of_extensions, first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext

    def test_it_should_process_dependency_chain_extensions_properly(self):
        with self.__setup_test_and_get_exts() as (
                exthandlers_handler, protocol, no_of_extensions, first_ext, second_ext, third_ext, dependent_sc_ext,
                independent_sc_ext):
            with enable_invocations(first_ext, second_ext, third_ext, dependent_sc_ext,
                                    independent_sc_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                self.assertEqual(no_of_extensions,
                                 len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                                 "incorrect extensions reported")
                invocation_record.compare(
                    (first_ext, ExtensionCommandNames.INSTALL),
                    (first_ext, ExtensionCommandNames.ENABLE),
                    (independent_sc_ext, ExtensionCommandNames.INSTALL),
                    (independent_sc_ext, ExtensionCommandNames.ENABLE),
                    (dependent_sc_ext, ExtensionCommandNames.INSTALL),
                    (dependent_sc_ext, ExtensionCommandNames.ENABLE),
                    (second_ext, ExtensionCommandNames.ENABLE),
                    (third_ext, ExtensionCommandNames.ENABLE)
                )

            mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                              handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                              expected_count=3)
            expected_extensions = {
                "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 2},
                "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2},
                "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 1},
            }
            self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True)

            sc_dependent_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                       handler_name="Microsoft.Powershell.ExampleExtension")
            expected_extensions = {
                "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 2}
            }
            self._assert_extension_status(sc_dependent_handler[:], expected_extensions)

            sc_independent_handler = self._assert_and_get_handler_status(
                aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Azure.Geneva.GenevaMonitoring",
                handler_version="1.1.0")
            expected_extensions = {
                "Microsoft.Azure.Geneva.GenevaMonitoring": {"status": ExtensionStatusValue.success, "seq_no": 1}
            }
            self._assert_extension_status(sc_independent_handler[:], expected_extensions)

    def __assert_invalid_status_scenario(self, protocol, fail_code, mc_status="NotReady",
                                         mc_message="Plugin installed but not enabled", err_msg=None):
        mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                          handler_name="OSTCExtensions.ExampleHandlerLinux",
                                                          expected_count=3, status=mc_status, message=mc_message)
        expected_extensions = {
            "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": fail_code},
            "secondExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": err_msg},
            "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": err_msg},
        }
        self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True)

        sc_dependent_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status,
                                                                   handler_name="Microsoft.Powershell.ExampleExtension",
                                                                   status="NotReady", message=err_msg)
        self.assertTrue(all('runtimeSettingsStatus' not in handler for handler in sc_dependent_handler))

        sc_independent_handler = self._assert_and_get_handler_status(
            aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Azure.Geneva.GenevaMonitoring",
            handler_version="1.1.0", status="NotReady", message=err_msg)
        self.assertTrue(all('runtimeSettingsStatus' not in handler for handler in sc_independent_handler))

    def test_it_should_report_extension_status_failures_for_all_dependent_extensions(self):
        with self.__setup_test_and_get_exts() as (
exthandlers_handler, protocol, no_of_extensions, first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext): # Fail the enable for firstExtension. fail_code, fail_action = Actions.generate_unique_fail() first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", enable_action=fail_action, supports_multiple_extensions=True) with enable_invocations(first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") # Since firstExtension is high up on the dependency chain, no other extensions should be executed invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), (first_ext, ExtensionCommandNames.ENABLE) ) err_msg = 'Skipping processing of extensions since execution of dependent extension OSTCExtensions.ExampleHandlerLinux.firstExtension failed' self.__assert_invalid_status_scenario(protocol, fail_code, err_msg=err_msg) def test_it_should_stop_execution_if_status_file_contains_errors(self): # This test tests the scenario where the extensions exit with a success exit code but fail subsequently with an # error in the status file self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_multi_config_dependencies.xml") original_popen = subprocess.Popen invocation_records = [] fail_code = str(uuid.uuid4()) def mock_popen(cmd, *_, **kwargs): try: handler_name, handler_version, command_name = extract_extension_info_from_command(cmd) except ValueError: return original_popen(cmd, *_, **kwargs) if 'env' in kwargs: env = kwargs['env'] if ExtCommandEnvVariable.ExtensionName in env: full_name = "{0}.{1}".format(handler_name, env[ExtCommandEnvVariable.ExtensionName]) status_file = "{0}.{1}.status".format(env[ExtCommandEnvVariable.ExtensionName], 
env[ExtCommandEnvVariable.ExtensionSeqNumber]) status_contents = [{"status": {"status": ExtensionStatusValue.error, "code": fail_code, "formattedMessage": {"message": fail_code, "lang": "en-US"}}}] fileutil.write_file(os.path.join(env[ExtCommandEnvVariable.ExtensionPath], "status", status_file), json.dumps(status_contents)) invocation_records.append((full_name, handler_version, command_name)) # The return code is 0 but the status file should have the error, this it to test the scenario # where the extensions return a success code but fail later. return original_popen(['echo', "works"], *_, **kwargs) invocation_records.append((handler_name, handler_version, command_name)) return original_popen(cmd, *_, **kwargs) with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") # Since we're writing error status for firstExtension, only the firstExtension should be invoked and # everything else should be skipped expected_invocations = [ ('OSTCExtensions.ExampleHandlerLinux.firstExtension', '1.0.0', ExtensionCommandNames.INSTALL), ('OSTCExtensions.ExampleHandlerLinux.firstExtension', '1.0.0', ExtensionCommandNames.ENABLE)] self.assertEqual(invocation_records, expected_invocations, "Invalid invocations found") err_msg = 'Dependent Extension OSTCExtensions.ExampleHandlerLinux.firstExtension did not succeed. 
Status was error' self.__assert_invalid_status_scenario(protocol, fail_code, mc_status="Ready", mc_message="Plugin enabled", err_msg=err_msg) Azure-WALinuxAgent-a976115/tests/ga/test_periodic_operation.py000066400000000000000000000156561510742556200244270ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime import time from azurelinuxagent.common.future import UTC from azurelinuxagent.ga.monitor import PeriodicOperation from tests.lib.tools import AgentTestCase, patch, PropertyMock class TestPeriodicOperation(AgentTestCase): class SaveRunTimestamp(PeriodicOperation): def __init__(self, period): super(TestPeriodicOperation.SaveRunTimestamp, self).__init__(period) self.run_time = None def _operation(self): self.run_time = datetime.datetime.now(UTC) def test_it_should_take_a_timedelta_as_period(self): op = TestPeriodicOperation.SaveRunTimestamp(datetime.timedelta(hours=1)) op.run() expected = op.run_time + datetime.timedelta(hours=1) difference = op.next_run_time() - expected self.assertTrue(difference < datetime.timedelta(seconds=1), "The next run time exceeds the expected value by more than 1 second: {0} vs {1}".format(op.next_run_time(), expected)) def test_it_should_take_a_number_of_seconds_as_period(self): op = TestPeriodicOperation.SaveRunTimestamp(3600) op.run() expected = op.run_time + datetime.timedelta(hours=1) difference = op.next_run_time() - expected 
        self.assertTrue(difference < datetime.timedelta(seconds=1),
                        "The next run time exceeds the expected value by more than 1 second: {0} vs {1}".format(
                            op.next_run_time(), expected))

    class CountInvocations(PeriodicOperation):
        def __init__(self, period):
            super(TestPeriodicOperation.CountInvocations, self).__init__(period)
            self.invoke_count = 0

        def _operation(self):
            self.invoke_count += 1

    def test_it_should_be_invoked_when_run_is_called_first_time(self):
        op = TestPeriodicOperation.CountInvocations(datetime.timedelta(hours=1))
        op.run()
        self.assertTrue(op.invoke_count > 0, "The operation was not invoked")

    def test_it_should_not_be_invoked_if_the_period_has_not_elapsed(self):
        pop = TestPeriodicOperation.CountInvocations(datetime.timedelta(hours=1))
        for _ in range(5):
            pop.run()
        # the first run() invoked the operation, so the count is 1
        self.assertEqual(pop.invoke_count, 1, "The operation was invoked before the period elapsed")

    def test_it_should_be_invoked_if_the_period_has_elapsed(self):
        pop = TestPeriodicOperation.CountInvocations(datetime.timedelta(milliseconds=1))
        for _ in range(5):
            pop.run()
            time.sleep(0.001)
        self.assertEqual(pop.invoke_count, 5, "The operation was not invoked after the period elapsed")

    class RaiseException(PeriodicOperation):
        def _operation(self):
            raise Exception("A test exception")

    @staticmethod
    def _get_number_of_warnings(warn_patcher, message="A test exception"):
        return len([args for args, _ in warn_patcher.call_args_list if any(message in a for a in args)])

    def test_it_should_log_a_warning_if_the_operation_fails(self):
        with patch("azurelinuxagent.common.logger.warn") as warn_patcher:
            TestPeriodicOperation.RaiseException(datetime.timedelta(hours=1)).run()
            self.assertEqual(self._get_number_of_warnings(warn_patcher), 1,
                             "The error in the operation should have been reported exactly once")

    def test_it_should_not_log_multiple_warnings_when_the_period_has_not_elapsed(self):
        with patch("azurelinuxagent.common.logger.warn") as warn_patcher:
            pop = TestPeriodicOperation.RaiseException(datetime.timedelta(hours=1))
            for _ in range(5):
                pop.run()
            self.assertEqual(self._get_number_of_warnings(warn_patcher), 1,
                             "The error in the operation should have been reported exactly once")

    def test_it_should_not_log_multiple_warnings_when_the_period_has_elapsed(self):
        with patch("azurelinuxagent.common.logger.warn") as warn_patcher:
            with patch("azurelinuxagent.ga.periodic_operation.PeriodicOperation._LOG_WARNING_PERIOD",
                       new_callable=PropertyMock, return_value=datetime.timedelta(milliseconds=1)):
                pop = TestPeriodicOperation.RaiseException(datetime.timedelta(milliseconds=1))
                for _ in range(5):
                    pop.run()
                    time.sleep(0.001)
                self.assertEqual(self._get_number_of_warnings(warn_patcher), 5,
                                 "The error in the operation was not reported the expected number of times")

    class RaiseTwoExceptions(PeriodicOperation):
        def __init__(self, period):
            super(TestPeriodicOperation.RaiseTwoExceptions, self).__init__(period)
            self._count = 0

        def _operation(self):
            message = "WARNING {0}".format(self._count)
            if self._count == 0:
                self._count += 1
            raise Exception(message)

    def test_it_should_log_warnings_if_they_are_different(self):
        with patch("azurelinuxagent.common.logger.warn") as warn_patcher:
            pop = TestPeriodicOperation.RaiseTwoExceptions(0)
            for _ in range(5):
                pop.run()
            self.assertEqual(self._get_number_of_warnings(warn_patcher, "WARNING 0"), 1,
                             "The first error should have been reported exactly 1 time")
            self.assertEqual(self._get_number_of_warnings(warn_patcher, "WARNING 1"), 1,
                             "The second error should have been reported exactly 1 time")

    class NoOp(PeriodicOperation):
        def _operation(self):
            pass

    def test_sleep_until_next_operation_should_wait_for_the_closest_operation(self):
        operations = [
            TestPeriodicOperation.NoOp(datetime.timedelta(seconds=60)),
            TestPeriodicOperation.NoOp(datetime.timedelta(hours=1)),
            TestPeriodicOperation.NoOp(datetime.timedelta(seconds=10)),  # closest operation
            TestPeriodicOperation.NoOp(datetime.timedelta(minutes=11)),
            TestPeriodicOperation.NoOp(datetime.timedelta(days=1))
        ]
        for op in operations:
            op.run()

        def mock_sleep(seconds):
            mock_sleep.seconds = seconds
        mock_sleep.seconds = 0

        with patch("azurelinuxagent.ga.periodic_operation.time.sleep", side_effect=mock_sleep):
            PeriodicOperation.sleep_until_next_operation(operations)

        self.assertAlmostEqual(mock_sleep.seconds, 10, 0, "did not sleep for the expected time")

Azure-WALinuxAgent-a976115/tests/ga/test_persist_firewall_rules.py
# Copyright 2016 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#
import contextlib
import os
import shutil
import subprocess
import sys
import uuid

import azurelinuxagent.common.conf as conf
from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.osutil.default import DefaultOSUtil
from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler
from azurelinuxagent.common.utils import fileutil, shellutil
from tests.lib.tools import AgentTestCase, MagicMock, patch


class TestPersistFirewallRulesHandler(AgentTestCase):

    original_popen = subprocess.Popen

    def __init__(self, *args, **kwargs):
        super(TestPersistFirewallRulesHandler, self).__init__(*args, **kwargs)
        self._expected_service_name = ""
        self._binary_file = ""
        self._network_service_unit_file = ""

    def setUp(self):
        AgentTestCase.setUp(self)
        # Override for mocking Popen, should be of the form - (True/False, cmd-to-execute-if-True)
        self.__replace_popen_cmd = lambda *_: (False, "")
        self._executed_commands = []

        self.__test_dst_ip = "1.2.3.4"

        self.__systemd_dir = os.path.join(self.tmp_dir, "system")
        fileutil.mkdir(self.__systemd_dir)
        self.__agent_bin_dir = os.path.join(self.tmp_dir, "bin")
        fileutil.mkdir(self.__agent_bin_dir)
        self.__tmp_conf_lib = os.path.join(self.tmp_dir, "waagent")
        fileutil.mkdir(self.__tmp_conf_lib)
        conf.get_lib_dir = MagicMock(return_value=self.__tmp_conf_lib)

    def tearDown(self):
        shutil.rmtree(self.__systemd_dir, ignore_errors=True)
        shutil.rmtree(self.__agent_bin_dir, ignore_errors=True)
        shutil.rmtree(self.__tmp_conf_lib, ignore_errors=True)
        AgentTestCase.tearDown(self)

    def __mock_popen(self, cmd, *args, **kwargs):
        self._executed_commands.append(" ".join(cmd) if isinstance(cmd, list) else cmd)
        replace_cmd, replace_with_command = self.__replace_popen_cmd(cmd)
        if replace_cmd:
            cmd = replace_with_command
        return TestPersistFirewallRulesHandler.original_popen(cmd, *args, **kwargs)

    @contextlib.contextmanager
    def _get_persist_firewall_rules_handler(self, systemd=True):
        osutil = DefaultOSUtil()
        osutil.get_agent_bin_path = MagicMock(return_value=self.__agent_bin_dir)
        osutil.get_network_setup_service_install_path = MagicMock(return_value=self.__systemd_dir)

        self._expected_service_name = PersistFirewallRulesHandler._AGENT_NETWORK_SETUP_NAME_FORMAT.format(
            osutil.get_service_name())
        self._network_service_unit_file = os.path.join(self.__systemd_dir, self._expected_service_name)
        self._binary_file = os.path.join(conf.get_lib_dir(), PersistFirewallRulesHandler.BINARY_FILE_NAME)

        # Just for these tests, ignoring the mode of mkdir to allow non-sudo tests
        orig_mkdir = fileutil.mkdir
        with patch("azurelinuxagent.ga.persist_firewall_rules.fileutil.mkdir",
                   side_effect=lambda path, **mode: orig_mkdir(path)):
            with patch("azurelinuxagent.ga.persist_firewall_rules.get_osutil", return_value=osutil):
                with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=systemd):
                    with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen",
                               side_effect=self.__mock_popen):
                        yield PersistFirewallRulesHandler(self.__test_dst_ip)

    def __assert_firewall_called(self, cmd, validate_command_called=True):
        accept_command = "firewall-cmd --permanent --direct {0} ipv4 -t security -A OUTPUT -d {1} -p tcp -m owner --uid-owner {2} -j ACCEPT".format(
            cmd, self.__test_dst_ip, os.getuid())
        drop_command = "firewall-cmd --permanent --direct {0} ipv4 -t security -A OUTPUT -d {1} -p tcp -m conntrack --ctstate INVALID,NEW -j DROP".format(
            cmd, self.__test_dst_ip)
        if validate_command_called:
            self.assertIn(accept_command, self._executed_commands, "Firewall {0} command not found".format(cmd))
            self.assertIn(drop_command, self._executed_commands, "Firewall {0} command not found".format(cmd))
        else:
            self.assertNotIn(accept_command, self._executed_commands, "Firewall {0} command found".format(cmd))
            self.assertNotIn(drop_command, self._executed_commands, "Firewall {0} command found".format(cmd))

    def __assert_systemctl_called(self, cmd="enable", validate_command_called=True):
        systemctl_command = "systemctl {0} {1}".format(cmd, self._expected_service_name)
        if validate_command_called:
            self.assertIn(systemctl_command, self._executed_commands, "Systemctl command {0} not found".format(cmd))
        else:
            self.assertNotIn(systemctl_command, self._executed_commands, "Systemctl command {0} found".format(cmd))

    def __assert_firewall_cmd_running_called(self, validate_command_called=True):
        cmd = "firewall-cmd --state"
        if validate_command_called:
            self.assertIn(cmd, self._executed_commands, "Firewall state not checked")
        else:
            self.assertNotIn(cmd, self._executed_commands, "Firewall state not checked")

    def __assert_network_service_setup_properly(self):
        self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True)
        self.__assert_systemctl_called(cmd="enable", validate_command_called=True)
        self.__assert_firewall_called(cmd="--passthrough", validate_command_called=False)
        self.assertTrue(os.path.exists(self._network_service_unit_file), "Service unit file should be there")
        self.assertTrue(os.path.exists(self._binary_file), "Binary file should be there")

    @staticmethod
    def __mock_network_setup_service_enabled(cmd):
        if "firewall-cmd" in cmd:
            return True, ["echo", "not-running"]
        if "systemctl" in cmd:
            return True, ["echo", "enabled"]
        return False, []

    @staticmethod
    def __mock_network_setup_service_disabled(cmd):
        if "firewall-cmd" in cmd:
            return True, ["echo", "not-running"]
        if "systemctl" in cmd:
            return True, ["echo", "not enabled"]
        return False, []

    @staticmethod
    def __mock_firewalld_running_and_not_applied(cmd):
        if cmd == ["firewall-cmd", "--state"]:
            return True, ["echo", "running"]
        # This is to fail the check if firewalld-rules are already applied
        cmds_to_fail = ["firewall-cmd", "--query-passthrough", "--destination-port", "53"]
        if all(cmd_to_fail in cmd for cmd_to_fail in cmds_to_fail):
            return True, ["sh", "-c", "exit 1"]
        if "firewall-cmd" in cmd:
            return True, ["echo", "enabled"]
        return False, []

    def __setup_and_assert_network_service_setup_scenario(self, handler, mock_popen=None):
        mock_popen = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled if mock_popen is None else mock_popen
        self.__replace_popen_cmd = mock_popen
        handler.setup()

        self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True)
        self.__assert_systemctl_called(cmd="enable", validate_command_called=True)
        self.__assert_firewall_cmd_running_called(validate_command_called=True)
        self.__assert_firewall_called(cmd="--query-passthrough", validate_command_called=False)
        self.__assert_firewall_called(cmd="--remove-passthrough", validate_command_called=False)
        self.__assert_firewall_called(cmd="--passthrough", validate_command_called=False)
        self.assertTrue(os.path.exists(handler.get_service_file_path()), "Service unit file not found")

    def test_it_should_skip_setup_if_firewalld_already_enabled(self):
        self.__replace_popen_cmd = lambda cmd: ("firewall-cmd" in cmd, ["echo", "running"])
        with self._get_persist_firewall_rules_handler() as handler:
            handler.setup()

        # Assert we verified that rules were set using firewall-cmd
        self.__assert_firewall_called(cmd="--query-passthrough", validate_command_called=True)
        # Assert no commands for adding rules using firewall-cmd were called
        self.__assert_firewall_called(cmd="--remove-passthrough", validate_command_called=False)
        self.__assert_firewall_called(cmd="--passthrough", validate_command_called=False)
        # Assert no commands for systemctl were called
        self.assertFalse(any("systemctl" in cmd for cmd in self._executed_commands), "Systemctl shouldn't be called")

    def test_it_should_skip_setup_if_agent_network_setup_service_already_enabled_and_version_same(self):
        with self._get_persist_firewall_rules_handler() as handler:
            # 1st time should setup the service
            self.__setup_and_assert_network_service_setup_scenario(handler)

            # 2nd time setup should do nothing as service is enabled and no version updated
            self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_enabled
            # Reset state
            self._executed_commands = []
            handler.setup()

            self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True)
            self.__assert_systemctl_called(cmd="enable", validate_command_called=False)
            self.__assert_firewall_cmd_running_called(validate_command_called=True)
            self.__assert_firewall_called(cmd="--query-passthrough", validate_command_called=False)
            self.__assert_firewall_called(cmd="--remove-passthrough", validate_command_called=False)
            self.__assert_firewall_called(cmd="--passthrough", validate_command_called=False)
            self.assertTrue(os.path.exists(handler.get_service_file_path()), "Service unit file not found")

    def test_it_should_always_replace_binary_file_only_if_using_custom_network_service(self):

        def _find_in_file(file_name, line_str):
            try:
                with open(file_name, 'r') as fh:
                    content = fh.read()
                    return line_str in content
            except Exception:
                # swallow exception
                pass
            return False

        test_str = 'os.system("{py_path} {egg_path} --setup-firewall={wire_ip}")'
        current_exe_path = os.path.join(os.getcwd(), sys.argv[0])

        self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled

        with self._get_persist_firewall_rules_handler() as handler:
            self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there")
            self.assertFalse(os.path.exists(self._network_service_unit_file), "Unit file should not be present")
            handler.setup()

        orig_service_file_contents = "ExecStart={py_path} {binary_path}".format(py_path=sys.executable,
                                                                                binary_path=self._binary_file)
        self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True)
        self.__assert_firewall_called(cmd="--passthrough", validate_command_called=False)
        self.assertTrue(os.path.exists(self._binary_file), "Binary file should be there")
        self.assertTrue(_find_in_file(self._binary_file,
                                      test_str.format(py_path=sys.executable, egg_path=current_exe_path,
                                                      wire_ip=self.__test_dst_ip)),
                        "Binary file not set correctly")
        self.assertTrue(_find_in_file(self._network_service_unit_file, orig_service_file_contents),
                        "Service unit file not set correctly")

        # Change test params
        self.__test_dst_ip = "9.8.7.6"
        # The service should say its enabled now
        self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_enabled

        with self._get_persist_firewall_rules_handler() as handler:
            # The Binary file should be available on the 2nd run
            self.assertTrue(os.path.exists(self._binary_file), "Binary file should be there")
            handler.setup()

        self.assertTrue(_find_in_file(self._binary_file,
                                      test_str.format(py_path=sys.executable, egg_path=current_exe_path,
                                                      wire_ip=self.__test_dst_ip)),
                        "Binary file not updated correctly")
        # Unit file should NOT be updated
        self.assertTrue(_find_in_file(self._network_service_unit_file, orig_service_file_contents),
                        "Service unit file should not be updated")

    def test_it_should_use_firewalld_if_available(self):
        self.__replace_popen_cmd = self.__mock_firewalld_running_and_not_applied
        with self._get_persist_firewall_rules_handler() as handler:
            handler.setup()

        self.__assert_firewall_cmd_running_called(validate_command_called=True)
        self.__assert_firewall_called(cmd="--query-passthrough", validate_command_called=True)
        self.__assert_firewall_called(cmd="--passthrough", validate_command_called=True)
        self.__assert_firewall_called(cmd="--remove-passthrough", validate_command_called=True)
        self.assertFalse(any("systemctl" in cmd for cmd in self._executed_commands), "Systemctl shouldn't be called")

    def test_it_should_set_up_custom_service_if_no_firewalld(self):
        self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled
        with self._get_persist_firewall_rules_handler() as handler:
            self.assertFalse(os.path.exists(self._network_service_unit_file), "Service unit file should not be there")
            self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there")
            handler.setup()
            self.__assert_network_service_setup_properly()

    def test_it_should_cleanup_files_on_error(self):
        orig_write_file = fileutil.write_file
        files_to_fail = []

        def mock_write_file(path, _, *__):
            if files_to_fail[0] in path:
                raise IOError("Invalid file: {0}".format(path))
            return orig_write_file(path, _, *__)

        self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled
        with self._get_persist_firewall_rules_handler() as handler:
            test_files = [self._binary_file, self._network_service_unit_file]
            for file_to_fail in test_files:
                files_to_fail = [file_to_fail]
                with patch("azurelinuxagent.ga.persist_firewall_rules.fileutil.write_file",
                           side_effect=mock_write_file):
                    with self.assertRaises(Exception) as context_manager:
                        handler.setup()
                    self.assertIn("Invalid file: {0}".format(file_to_fail), ustr(context_manager.exception))
                    self.assertFalse(os.path.exists(file_to_fail), "File should be deleted: {0}".format(file_to_fail))

                # Cleanup remaining files for test clarity
                for test_file in test_files:
                    try:
                        os.remove(test_file)
                    except Exception:
                        pass

    def test_it_should_execute_binary_file_successfully(self):
        # A bare-bone test to ensure no simple syntactical errors in the binary file as its generated dynamically
        self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled
        with self._get_persist_firewall_rules_handler() as handler:
            self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there")
            handler.setup()
            self.assertTrue(os.path.exists(self._binary_file), "Binary file not set properly")

        shellutil.run_command([sys.executable, self._binary_file])

    def test_it_should_not_fail_if_egg_not_found(self):
        self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled
        test_str = str(uuid.uuid4())
        with patch("sys.argv", [test_str]):
            with self._get_persist_firewall_rules_handler() as handler:
                self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there")
                handler.setup()
            output = shellutil.run_command([sys.executable, self._binary_file], stderr=subprocess.STDOUT)
            expected_str = "{0} file not found, skipping execution of firewall execution setup for this boot".format(
                os.path.join(os.getcwd(), test_str))
            self.assertIn(expected_str, output, "Unexpected output")

    def test_it_should_delete_custom_service_files_if_firewalld_enabled(self):
        with self._get_persist_firewall_rules_handler() as handler:
            # 1st run - Setup the Custom Service
            self.__setup_and_assert_network_service_setup_scenario(handler)

            # 2nd run - Enable Firewalld and ensure the agent sets firewall rules using firewalld and deletes custom service
            self._executed_commands = []
            self.__replace_popen_cmd = self.__mock_firewalld_running_and_not_applied
            handler.setup()
            self.__assert_firewall_cmd_running_called(validate_command_called=True)
            self.__assert_firewall_called(cmd="--query-passthrough", validate_command_called=True)
            self.__assert_firewall_called(cmd="--remove-passthrough", validate_command_called=True)
            self.__assert_firewall_called(cmd="--passthrough", validate_command_called=True)
            self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=False)
            self.__assert_systemctl_called(cmd="enable", validate_command_called=False)
            self.assertFalse(os.path.exists(handler.get_service_file_path()), "Service unit file found")
            self.assertFalse(os.path.exists(os.path.join(conf.get_lib_dir(), handler.BINARY_FILE_NAME)),
                             "Binary file found")

    def test_it_should_reset_service_unit_files_if_version_changed(self):
        with self._get_persist_firewall_rules_handler() as handler:
            # 1st step - Setup the service with old Version
            test_ver = str(uuid.uuid4())
            with patch.object(handler, "_UNIT_VERSION", test_ver):
                self.__setup_and_assert_network_service_setup_scenario(handler)
            self.assertIn(test_ver, fileutil.read_file(handler.get_service_file_path()), "Test version not found")

            # 2nd step - Re-run the setup and ensure the service file set up again even if service enabled
            self._executed_commands = []
            self.__setup_and_assert_network_service_setup_scenario(
                handler, mock_popen=self.__mock_network_setup_service_enabled)
            self.assertNotIn(test_ver, fileutil.read_file(handler.get_service_file_path()),
                             "Test version found incorrectly")

    def test_it_should_reset_service_unit_file_if_python_version_changes(self):
        with self._get_persist_firewall_rules_handler() as handler:
            # 1st step - Setup the service with some python Version
            python_ver = "test_python"
            with patch("sys.executable", python_ver):
                self.__setup_and_assert_network_service_setup_scenario(handler)
            self.assertIn(python_ver, fileutil.read_file(handler.get_service_file_path()), "Python version not found")

            # 2nd step - Re-run the setup and ensure the service file set up again even if service enabled
            self._executed_commands = []
            self.__setup_and_assert_network_service_setup_scenario(
                handler, mock_popen=self.__mock_network_setup_service_enabled)
            self.assertNotIn(python_ver, fileutil.read_file(handler.get_service_file_path()),
                             "Python version found incorrectly")

Azure-WALinuxAgent-a976115/tests/ga/test_policy_engine.py
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.4+ and Openssl 1.0+ # import json import os from azurelinuxagent.ga.policy.policy_engine import ExtensionPolicyEngine, InvalidPolicyError, \ _PolicyEngine, _DEFAULT_ALLOW_LISTED_EXTENSIONS_ONLY, _DEFAULT_SIGNATURE_REQUIRED from tests.lib.tools import AgentTestCase from tests.lib.tools import patch TEST_EXTENSION_NAME = "Microsoft.Azure.ActiveDirectory.AADSSHLoginForLinux" class _TestPolicyBase(AgentTestCase): """ Define common methods for policy engine test classes. """ def setUp(self): AgentTestCase.setUp(self) self.policy_path = os.path.join(self.tmp_dir, "waagent_policy.json") # Patch attributes to enable policy feature self.patch_policy_path = patch('azurelinuxagent.common.conf.get_policy_file_path', return_value=str(self.policy_path)) self.patch_policy_path.start() self.patch_conf_flag = patch('azurelinuxagent.ga.policy.policy_engine.conf.get_extension_policy_enabled', return_value=True) self.patch_conf_flag.start() def tearDown(self): patch.stopall() AgentTestCase.tearDown(self) def _create_policy_file(self, policy): with open(self.policy_path, mode='w') as policy_file: if isinstance(policy, dict): json.dump(policy, policy_file, indent=4) else: policy_file.write(policy) policy_file.flush() def _run_test_cases_should_fail_to_parse(self, cases, assert_msg): """ Cases should be a list of policies. For each policy in the list, we create a policy file, initialize policy engine, and assert that InvalidPolicyError is raised. """ for policy in cases: self._create_policy_file(policy) msg = "invalid policy should not have parsed successfully: {0}.\nPolicy: \n{1}".format(assert_msg, policy) with self.assertRaises(InvalidPolicyError, msg=msg): _PolicyEngine() class TestPolicyEngine(_TestPolicyBase): """ Test policy enablement and parsing logic for _PolicyEngine. 
""" def test_policy_enforcement_should_be_enabled_when_policy_file_exists_and_conf_flag_true(self): """ When conf flag is set to true and policy file is present at expected location, feature should be enabled. """ # Create policy file with empty policy object at the expected path to enable feature. self._create_policy_file( { "policyVersion": "0.0.1" }) engine = _PolicyEngine() self.assertTrue(engine.policy_enforcement_enabled, msg="Conf flag is set to true so policy enforcement should be enabled.") def test_policy_enforcement_should_be_disabled_when_conf_flag_false_or_no_policy_file(self): # Test when conf flag is turned off - feature should be disabled. self.patch_conf_flag.stop() engine1 = _PolicyEngine() self.assertFalse(engine1.policy_enforcement_enabled, msg="Conf flag is set to false and policy file missing so policy enforcement should be disabled.") # Turn on conf flag - feature should still be disabled, because policy file is not present. self.patch_conf_flag.start() engine2 = _PolicyEngine() self.assertFalse(engine2.policy_enforcement_enabled, msg="Policy file is not present so policy enforcement should be disabled.") # Create a policy file, but turn off conf flag - feature should be disabled due to flag. self.patch_conf_flag.stop() self._create_policy_file({}) engine3 = _PolicyEngine() self.assertFalse(engine3.policy_enforcement_enabled, msg="Conf flag is set to false so policy enforcement should be disabled.") def test_should_parse_policy_successfully(self): """ Values provided in custom policy should override any defaults. 
""" policy1 = \ { "policyVersion": "0.0.1", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": True, "extensions": { TEST_EXTENSION_NAME: { "signatureRequired": False, "runtimePolicy": True } } } } policy2 = \ { "policyVersion": "0.0.1", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": False, "extensions": { TEST_EXTENSION_NAME: { "signatureRequired": True, "runtimePolicy": [ True, None, { "bar": "baz" } ] } } } } for expected_policy in [policy1, policy2]: self._create_policy_file(expected_policy) engine = _PolicyEngine() actual_policy = engine._policy self.assertEqual(actual_policy.get("policyVersion"), expected_policy.get("policyVersion")) actual_extension_policy = actual_policy.get("extensionPolicies") expected_extension_policy = expected_policy.get("extensionPolicies") self.assertEqual(actual_extension_policy.get("allowListedExtensionsOnly"), expected_extension_policy.get("allowListedExtensionsOnly")) self.assertEqual(actual_extension_policy.get("signatureRequired"), expected_extension_policy.get("signatureRequired")) actual_individual_policy = actual_extension_policy.get("extensions").get(TEST_EXTENSION_NAME) expected_individual_policy = expected_extension_policy.get("extensions").get(TEST_EXTENSION_NAME) self.assertEqual(actual_individual_policy.get("signatureRequired"), expected_individual_policy.get("signatureRequired")) self.assertEqual(actual_individual_policy.get("runtimePolicy"), expected_individual_policy.get("runtimePolicy")) def test_it_should_verify_policy_version_is_required(self): self._create_policy_file({ "extensionPolicies": {} }) with self.assertRaises(InvalidPolicyError): _PolicyEngine() def test_it_should_accept_partially_specified_policy_versions(self): for policy_version in ['0', '0.1', '0.1.0']: self._create_policy_file({ "policyVersion": policy_version, }) self.assertEqual(policy_version, _PolicyEngine()._policy["policyVersion"]) def 
test_should_raise_error_if_policy_file_is_invalid_json(self): cases = [ ''' { "policyVersion": "0.1.0", "extensionPolicies": { ''', "", " ", "policy", ''' { not_a_string: ""} ''' ] self._run_test_cases_should_fail_to_parse(cases, "not a valid json") def test_should_raise_error_for_invalid_policy_version(self): cases = [ {"policyVersion": "1.2.a"}, {"policyVersion": 0}, {"policyVersion": None} ] self._run_test_cases_should_fail_to_parse(cases, "policy version invalid") def test_should_raise_error_for_unsupported_policy_version(self): cases = [ {"policyVersion": "9.9.9"}, {"policyVersion": "9"} ] self._run_test_cases_should_fail_to_parse(cases, "agent does not support policy version") def test_should_raise_error_if_extensions_policy_is_not_dict(self): cases = [ { "extensionPolicies": "" }, { "extensionPolicies": None } ] self._run_test_cases_should_fail_to_parse(cases, "extensionPolicies is not a dict") def test_should_raise_error_if_allowListedExtensionsOnly_is_not_bool(self): cases = [ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": "True", # Should be bool "signatureRequired": False, "extensions": {} } } ] self._run_test_cases_should_fail_to_parse(cases, "allowListedExtensionsOnly is not a bool") def test_should_raise_error_if_signatureRequired_is_not_bool(self): cases = [ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": False, "extensions": { TEST_EXTENSION_NAME: { "signatureRequired": "False" # Should be bool } } } }, { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": "False", # Should be bool "extensions": {} } } ] self._run_test_cases_should_fail_to_parse(cases, "signatureRequired is not a bool") def test_should_raise_error_if_extensions_is_not_dict(self): cases = [ { "extensionPolicies": { "extensions": [] } }, { "extensionPolicies": { "extensions": 0 } }, { "extensionPolicies": { "extensions": None } } ] 
self._run_test_cases_should_fail_to_parse(cases, "'extensions' is not a dict") def test_should_raise_error_if_individual_extension_policy_is_not_dict(self): cases = [ { "extensionPolicies": { "extensions": { "Ext.Name": 0 } } }, { "extensionPolicies": { "extensions": { "Ext.Name": [] } } } ] self._run_test_cases_should_fail_to_parse(cases, "individual extension policy is not a dict") def test_should_raise_error_for_unrecognized_attribute(self): # All cases below have either a typo or a random additional attribute. cases = [ {"policyVerion": "0.0.1"}, {"extentionPolicies": {}}, {"extensionPolicies": { "signingRequired": {} }}, {"extensionPolicies": { "extensions": { TEST_EXTENSION_NAME: { "randomAttribute": "" } } }} ] self._run_test_cases_should_fail_to_parse(cases, "unrecognized attribute in policy") class TestExtensionPolicyEngine(_TestPolicyBase): """ Test ExtensionPolicyEngine should_allow() and should_enforce_signature_validation(). """ def test_should_allow_and_should_not_enforce_signature_if_no_custom_policy_file(self): """ When custom policy file not present, should allow all extensions and not enforce signature. """ # No policy file is present - feature is disabled. engine = ExtensionPolicyEngine() should_allow = engine.should_allow_extension(TEST_EXTENSION_NAME) self.assertTrue(should_allow, msg="Policy feature is disabled because no policy file present, so all extensions should be allowed.") should_enforce = engine.should_enforce_signature_validation(TEST_EXTENSION_NAME) self.assertFalse(should_enforce, msg="Policy feature is disabled because no policy file present, so signature should not be enforced.") def test_should_allow_and_should_not_enforce_signature_if_conf_flag_false(self): """ When conf flag turned off, should allow all extensions and not enforce signature. 
""" self.patch_conf_flag.stop() self._create_policy_file({}) engine = ExtensionPolicyEngine() should_allow = engine.should_allow_extension(TEST_EXTENSION_NAME) self.assertTrue(should_allow, msg="Policy feature is disabled because conf flag false, so all extensions should be allowed.") should_enforce = engine.should_enforce_signature_validation(TEST_EXTENSION_NAME) self.assertFalse(should_enforce, msg="Policy feature is disabled because conf flag false, so signature should not be enforced.") def test_should_use_default_policy_if_no_extension_policy_specified(self): """ Test that default policy is used when policy file does not specify the extension policy. """ policy_cases = [ { "policyVersion": "0.1.0" }, { "policyVersion": "0.1.0", "extensionPolicies": {} } ] for policy in policy_cases: self._create_policy_file(policy) engine = ExtensionPolicyEngine() should_allow = engine.should_allow_extension(TEST_EXTENSION_NAME) self.assertEqual(should_allow, not _DEFAULT_ALLOW_LISTED_EXTENSIONS_ONLY, msg="Extension policy is not specified, so should use default policy.") should_enforce = engine.should_enforce_signature_validation(TEST_EXTENSION_NAME) self.assertEqual(should_enforce, _DEFAULT_SIGNATURE_REQUIRED, msg="Extension policy is not specified, so should use default policy.") def test_should_allow_if_allowListedExtensionsOnly_true_and_extension_in_list(self): """ If allowListedExtensionsOnly is true and extension in list, should_allow = True. 
""" TEST_EXTENSION_NAME_2 = "Test.Extension.Name" policy = \ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": False, "extensions": { TEST_EXTENSION_NAME: {}, TEST_EXTENSION_NAME_2: { "signatureRequired": False } } } } self._create_policy_file(policy) engine = ExtensionPolicyEngine() should_allow = engine.should_allow_extension(TEST_EXTENSION_NAME) self.assertTrue(should_allow, msg="Extension is in allowlist, so should be allowed.") should_allow = engine.should_allow_extension(TEST_EXTENSION_NAME_2) self.assertTrue(should_allow, msg="Extension is in allowlist, so should be allowed.") def test_should_not_allow_if_allowListedExtensionsOnly_true_and_extension_not_in_list(self): """ If allowListedExtensionsOnly is true and extension not in list, should_allow = False. """ policy = \ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": False, "extensions": {} # Extension not in allowed list. } } self._create_policy_file(policy) engine = ExtensionPolicyEngine() should_allow = engine.should_allow_extension(TEST_EXTENSION_NAME) self.assertFalse(should_allow, msg="allowListedExtensionsOnly is true and extension is not in allowlist, so should not be allowed.") def test_should_allow_if_allowListedExtensionsOnly_false(self): """ If allowListedExtensionsOnly is false, should_allow = True (whether extension in list or not). """ # Test an extension in the allowlist, and an extension not in the allowlist. Both should be allowed. 
policy = \ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": False, "signatureRequired": False, "extensions": { TEST_EXTENSION_NAME: { "signatureRequired": False } } } } self._create_policy_file(policy) engine = ExtensionPolicyEngine() self.assertTrue(engine.should_allow_extension(TEST_EXTENSION_NAME), msg="allowListedExtensionsOnly is false, so extension should be allowed.") self.assertTrue(engine.should_allow_extension("Random.Ext"), msg="allowListedExtensionsOnly is false, so extension should be allowed.") def test_should_enforce_signature_if_individual_signatureRequired_true(self): """ If signatureRequired is true for individual extension, should_enforce_signature_validation = True (whether global signatureRequired is true or false). """ for global_rule in [True, False]: policy = \ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": False, "signatureRequired": global_rule, "extensions": { TEST_EXTENSION_NAME: { "signatureRequired": True } } } } self._create_policy_file(policy) engine = ExtensionPolicyEngine() should_enforce_signature = engine.should_enforce_signature_validation(TEST_EXTENSION_NAME) self.assertTrue(should_enforce_signature, msg="Individual signatureRequired policy is true, so signature should be enforced.") def test_should_not_enforce_signature_if_individual_signatureRequired_false(self): """ If signatureRequired is false for individual extension policy, should_enforce_signature_validation = False (whether global signatureRequired is true or false). 
""" for global_rule in [True, False]: policy = \ { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": False, "signatureRequired": global_rule, "extensions": { TEST_EXTENSION_NAME: { "signatureRequired": False, } } } } self._create_policy_file(policy) engine = ExtensionPolicyEngine() should_enforce_signature = engine.should_enforce_signature_validation(TEST_EXTENSION_NAME) self.assertFalse(should_enforce_signature, msg="Individual signatureRequired policy is false, so signature should be not enforced.") def test_should_use_global_signatureRequired_when_an_individual_policy_is_not_specified(self): for global_policy in [True, False]: extensions_test_cases = [ None, {}, { TEST_EXTENSION_NAME: {} }, { TEST_EXTENSION_NAME: { "runtimePolicy": "an arbitrary object" } } ] for extensions in extensions_test_cases: policy = { "policyVersion": "0.1.0", "extensionPolicies": { "allowListedExtensionsOnly": True, "signatureRequired": global_policy, } } if extensions is not None: policy["extensionPolicies"]["extensions"] = extensions self._create_policy_file(policy) self.assertEqual( global_policy, ExtensionPolicyEngine().should_enforce_signature_validation(TEST_EXTENSION_NAME), "The global signatureRequired ({0}) should have been used. Policy:\n{1}".format(global_policy, policy)) def test_extension_name_in_policy_should_be_case_insensitive(self): """ Extension name is allowed to be any case. Test that should_allow() and should_enforce_signature_validation() return expected results, even when the extension name does not match the case of the name specified in policy. 
        """
        ext_name_in_policy = "Microsoft.Azure.ActiveDirectory.AADSSHLoginForLinux"
        for ext_name_to_test in [
            "MicrOsoft.aZure.activedirectory.aaDsShloginFORlinux",
            "microsoft.azure.activedirectory.aadsshloginforlinux"
        ]:
            policy = \
                {
                    "policyVersion": "0.1.0",
                    "extensionPolicies": {
                        "allowListedExtensionsOnly": True,
                        "signatureRequired": False,
                        "extensions": {
                            ext_name_in_policy: {
                                "signatureRequired": True
                            }
                        }
                    }
                }
            self._create_policy_file(policy)
            engine = ExtensionPolicyEngine()
            should_allow = engine.should_allow_extension(ext_name_to_test)
            should_enforce_signature = engine.should_enforce_signature_validation(ext_name_to_test)
            self.assertTrue(should_allow,
                            msg="Extension should have been found in allowlist regardless of extension name case.")
            self.assertTrue(should_enforce_signature,
                            msg="Individual signatureRequired policy should have been found and used, regardless of extension name case.")

Azure-WALinuxAgent-a976115/tests/ga/test_remoteaccess.py

# Copyright Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import xml

from azurelinuxagent.common.protocol.goal_state import GoalState, RemoteAccess  # pylint: disable=unused-import
from tests.lib.tools import AgentTestCase, load_data, patch, Mock  # pylint: disable=unused-import
from tests.lib import wire_protocol_data
from tests.lib.mock_wire_protocol import mock_wire_protocol


class TestRemoteAccess(AgentTestCase):
    def test_parse_remote_access(self):
        data_str = load_data('wire/remote_access_single_account.xml')
        remote_access = RemoteAccess(data_str)
        self.assertNotEqual(None, remote_access)
        self.assertEqual("1", remote_access.incarnation)
        self.assertEqual(1, len(remote_access.user_list.users), "User count does not match.")
        self.assertEqual("testAccount", remote_access.user_list.users[0].name, "Account name does not match")
        self.assertEqual("encryptedPasswordString", remote_access.user_list.users[0].encrypted_password,
                         "Encrypted password does not match.")
        self.assertEqual("2019-01-01", remote_access.user_list.users[0].expiration, "Expiration does not match.")

    def test_goal_state_with_no_remote_access(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            self.assertIsNone(protocol.client.get_remote_access())

    def test_parse_two_remote_access_accounts(self):
        data_str = load_data('wire/remote_access_two_accounts.xml')
        remote_access = RemoteAccess(data_str)
        self.assertNotEqual(None, remote_access)
        self.assertEqual("1", remote_access.incarnation)
        self.assertEqual(2, len(remote_access.user_list.users), "User count does not match.")
        self.assertEqual("testAccount1", remote_access.user_list.users[0].name, "Account name does not match")
        self.assertEqual("encryptedPasswordString", remote_access.user_list.users[0].encrypted_password,
                         "Encrypted password does not match.")
        self.assertEqual("2019-01-01", remote_access.user_list.users[0].expiration, "Expiration does not match.")
        self.assertEqual("testAccount2", remote_access.user_list.users[1].name, "Account name does not match")
        self.assertEqual("encryptedPasswordString", remote_access.user_list.users[1].encrypted_password,
                         "Encrypted password does not match.")
        self.assertEqual("2019-01-01", remote_access.user_list.users[1].expiration, "Expiration does not match.")

    def test_parse_ten_remote_access_accounts(self):
        data_str = load_data('wire/remote_access_10_accounts.xml')
        remote_access = RemoteAccess(data_str)
        self.assertNotEqual(None, remote_access)
        self.assertEqual(10, len(remote_access.user_list.users), "User count does not match.")

    def test_parse_duplicate_remote_access_accounts(self):
        data_str = load_data('wire/remote_access_duplicate_accounts.xml')
        remote_access = RemoteAccess(data_str)
        self.assertNotEqual(None, remote_access)
        self.assertEqual(2, len(remote_access.user_list.users), "User count does not match.")
        self.assertEqual("testAccount", remote_access.user_list.users[0].name, "Account name does not match")
        self.assertEqual("encryptedPasswordString", remote_access.user_list.users[0].encrypted_password,
                         "Encrypted password does not match.")
        self.assertEqual("2019-01-01", remote_access.user_list.users[0].expiration, "Expiration does not match.")
        self.assertEqual("testAccount", remote_access.user_list.users[1].name, "Account name does not match")
        self.assertEqual("encryptedPasswordString", remote_access.user_list.users[1].encrypted_password,
                         "Encrypted password does not match.")
        self.assertEqual("2019-01-01", remote_access.user_list.users[1].expiration, "Expiration does not match.")

    def test_parse_zero_remote_access_accounts(self):
        data_str = load_data('wire/remote_access_no_accounts.xml')
        remote_access = RemoteAccess(data_str)
        self.assertNotEqual(None, remote_access)
        self.assertEqual(0, len(remote_access.user_list.users), "User count does not match.")

    def test_update_remote_access_conf_remote_access(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_REMOTE_ACCESS) as protocol:
            self.assertIsNotNone(protocol.client.get_remote_access())
            self.assertEqual(1, len(protocol.client.get_remote_access().user_list.users))
            self.assertEqual('testAccount', protocol.client.get_remote_access().user_list.users[0].name)
            self.assertEqual('encryptedPasswordString',
                             protocol.client.get_remote_access().user_list.users[0].encrypted_password)

    def test_parse_bad_remote_access_data(self):
        data = "foobar"
        self.assertRaises(xml.parsers.expat.ExpatError, RemoteAccess, data)

Azure-WALinuxAgent-a976115/tests/ga/test_remoteaccess_handler.py

# Copyright Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # from datetime import timedelta, datetime from mock import Mock, MagicMock from azurelinuxagent.common.future import UTC from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.protocol.goal_state import RemoteAccess from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.ga.remoteaccess import RemoteAccessHandler from tests.lib.tools import AgentTestCase, load_data, patch, clear_singleton_instances from tests.lib.mock_wire_protocol import mock_wire_protocol from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_REMOTE_ACCESS class MockOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=super-init-not-called self.all_users = {} self.sudo_users = set() self.jit_enabled = True def useradd(self, username, expiration=None, comment=None): if username == "": raise Exception("test exception for bad username") if username in self.all_users: raise Exception("test exception, user already exists") self.all_users[username] = (username, None, None, None, comment, None, None, expiration) def conf_sudoer(self, username, nopasswd=False, remove=False): if not remove: self.sudo_users.add(username) else: self.sudo_users.remove(username) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if password == "": raise Exception("test exception for bad password") user = self.all_users[username] self.all_users[username] = (user[0], password, user[2], user[3], user[4], user[5], user[6], user[7]) def del_account(self, username): if username == "": raise Exception("test exception, bad data") if username not in self.all_users: raise Exception("test exception, user does not exist to delete") self.all_users.pop(username) def get_users(self): return self.all_users.values() def get_user_dictionary(users): user_dictionary = {} for user in users: user_dictionary[user[0]] = user return user_dictionary def 
mock_add_event(name, op, is_success, version, message): TestRemoteAccessHandler.eventing_data = (name, op, is_success, version, message) class TestRemoteAccessHandler(AgentTestCase): eventing_data = () def setUp(self): super(TestRemoteAccessHandler, self).setUp() # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) TestRemoteAccessHandler.eventing_data = () # add_user tests @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_add_user(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "foobar" expiration_date = datetime.now(UTC) + timedelta(days=1) pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) actual_user = users[tstuser] expected_expiration = (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d") self.assertEqual(actual_user[7], expected_expiration) self.assertEqual(actual_user[4], "JIT_Account") @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_add_user_bad_creation_data(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" 
tstuser = "" expiration = datetime.now(UTC) + timedelta(days=1) pwd = tstpassword error = "test exception for bad username" self.assertRaisesRegex(Exception, error, rah._add_user, tstuser, pwd, expiration) self.assertEqual(0, len(rah._os_util.get_users())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="") def test_add_user_bad_password_data(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "" tstuser = "foobar" expiration = datetime.now(UTC) + timedelta(days=1) pwd = tstpassword error = "test exception for bad password" self.assertRaisesRegex(Exception, error, rah._add_user, tstuser, pwd, expiration) self.assertEqual(0, len(rah._os_util.get_users())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_add_user_already_existing(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "foobar" expiration_date = datetime.now(UTC) + timedelta(days=1) pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) self.assertEqual(1, len(users.keys())) actual_user = users[tstuser] self.assertEqual(actual_user[7], (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d")) # add the new duplicate user, ensure it's not created and does not overwrite the existing user. 
# this does not test the user add function as that's mocked, it tests processing skips the remaining # calls after the initial failure new_user_expiration = datetime.now(UTC) + timedelta(days=5) self.assertRaises(Exception, rah._add_user, tstuser, pwd, new_user_expiration) # refresh users users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users after dup user attempted".format(tstuser)) self.assertEqual(1, len(users.keys())) actual_user = users[tstuser] self.assertEqual(actual_user[7], (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d")) # delete_user tests @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_delete_user(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "foobar" expiration_date = datetime.now(UTC) + timedelta(days=1) expected_expiration = (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d") # pylint: disable=unused-variable pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) rah._remove_user(tstuser) # refresh users users = get_user_dictionary(rah._os_util.get_users()) self.assertFalse(tstuser in users) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_new_user(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_single_account.xml') remote_access = RemoteAccess(data_str) tstuser = remote_access.user_list.users[0].name expiration_date = datetime.now(UTC) + timedelta(days=1) expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" 
            remote_access.user_list.users[0].expiration = expiration
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser))
            actual_user = users[tstuser]
            expected_expiration = (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d")
            self.assertEqual(actual_user[7], expected_expiration)
            self.assertEqual(actual_user[4], "JIT_Account")

    def test_do_not_add_expired_user(self):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_single_account.xml')
            remote_access = RemoteAccess(data_str)
            expiration = (datetime.now(UTC) - timedelta(days=2)).strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            remote_access.user_list.users[0].expiration = expiration
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertFalse("testAccount" in users)

    def test_error_add_user(self):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            tstuser = "foobar"
            expiration = datetime.now(UTC) + timedelta(days=1)
            pwd = "bad password"
            error = r"\[CryptError\] Error decoding secret\nInner error: Incorrect padding"
            self.assertRaisesRegex(Exception, error, rah._add_user, tstuser, pwd, expiration)
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertEqual(0, len(users))

    def test_handle_remote_access_no_users(self):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_no_accounts.xml')
            remote_access = RemoteAccess(data_str)
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertEqual(0, len(users.keys()))

    def test_handle_remote_access_validate_jit_user_valid(self):
        rah = RemoteAccessHandler(Mock())
        comment = "JIT_Account"
        result = rah._is_jit_user(comment)
        self.assertTrue(result, "Did not identify '{0}' as a JIT_Account".format(comment))

    def test_handle_remote_access_validate_jit_user_invalid(self):
        rah = RemoteAccessHandler(Mock())
        test_users = ["John Doe", None, "", " "]
        failed_results = ""
        for user in test_users:
            if rah._is_jit_user(user):
                failed_results += "incorrectly identified '{0} as a JIT_Account'. ".format(user)
        if len(failed_results) > 0:
            self.fail(failed_results)

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_multiple_users(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_two_accounts.xml')
            remote_access = RemoteAccess(data_str)
            testusers = []
            count = 0
            while count < 2:
                user = remote_access.user_list.users[count].name
                expiration_date = datetime.now(UTC) + timedelta(days=count + 1)
                expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
                remote_access.user_list.users[count].expiration = expiration
                testusers.append(user)
                count += 1
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertTrue(testusers[0] in users, "{0} missing from users".format(testusers[0]))
            self.assertTrue(testusers[1] in users, "{0} missing from users".format(testusers[1]))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    # max fabric supports in the Goal State
    def test_handle_remote_access_ten_users(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertEqual(10, len(users.keys()))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_user_removed(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertEqual(10, len(users.keys()))
            del rah._remote_access.user_list.users[:]
            self.assertEqual(10, len(users.keys()))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_bad_data_and_good_data(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                if count == 2:
                    user.name = ""
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertEqual(9, len(users.keys()))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_deleted_user_readded(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_single_account.xml')
            remote_access = RemoteAccess(data_str)
            tstuser = remote_access.user_list.users[0].name
            expiration_date = datetime.now(UTC) + timedelta(days=1)
            expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            remote_access.user_list.users[0].expiration = expiration
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser))

            os_util = rah._os_util
            os_util.__class__ = MockOSUtil
            os_util.all_users.clear()  # pylint: disable=no-member

            # refresh users
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertTrue(tstuser not in users)

            rah._handle_remote_access()

            # refresh users
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    @patch('azurelinuxagent.common.osutil.get_osutil', return_value=MockOSUtil())
    @patch('azurelinuxagent.common.protocol.util.ProtocolUtil.get_protocol', return_value=WireProtocol("12.34.56.78"))
    @patch('azurelinuxagent.common.protocol.wire.WireClient.get_remote_access', return_value="asdf")
    def test_remote_access_handler_run_bad_data(self, _1, _2, _3, _4):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            tstpassword = "]aPPEv}uNg1FPnl?"
            tstuser = "foobar"
            expiration_date = datetime.now(UTC) + timedelta(days=1)
            pwd = tstpassword
            rah._add_user(tstuser, pwd, expiration_date)
            users = get_user_dictionary(rah._os_util.get_users())
            self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser))

            rah.run()
            self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_multiple_users_one_removed(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertEqual(10, len(users))
            # now remove the user from RemoteAccess
            deleted_user = rah._remote_access.user_list.users[3]
            del rah._remote_access.user_list.users[3]
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertTrue(deleted_user not in users, "{0} still in users".format(deleted_user))
            self.assertEqual(9, len(users))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_multiple_users_null_remote_access(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertEqual(10, len(users))
            # now remove the user from RemoteAccess
            rah._remote_access = None
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertEqual(0, len(users))

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_multiple_users_error_with_null_remote_access(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertEqual(10, len(users))
            # now remove the user from RemoteAccess
            rah._remote_access = None
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertEqual(0, len(users))

    def test_remove_user_error(self):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            error = "test exception, bad data"
            self.assertRaisesRegex(Exception, error, rah._remove_user, "")

    def test_remove_user_not_exists(self):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            user = "bob"
            error = "test exception, user does not exist to delete"
            self.assertRaisesRegex(Exception, error, rah._remove_user, user)

    @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?")
    def test_handle_remote_access_remove_and_add(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            rah = RemoteAccessHandler(Mock())
            data_str = load_data('wire/remote_access_10_accounts.xml')
            remote_access = RemoteAccess(data_str)
            count = 0
            for user in remote_access.user_list.users:
                count += 1
                user.name = "tstuser{0}".format(count)
                expiration_date = datetime.now(UTC) + timedelta(days=count)
                user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC"
            rah._remote_access = remote_access
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertEqual(10, len(users))
            # now remove the user from RemoteAccess
            new_user = "tstuser11"
            deleted_user = rah._remote_access.user_list.users[3]
            rah._remote_access.user_list.users[3].name = new_user
            rah._handle_remote_access()
            users = rah._os_util.get_users()
            self.assertTrue(deleted_user not in users, "{0} still in users".format(deleted_user))
            self.assertTrue(new_user in [u[0] for u in users], "user {0} not in users".format(new_user))
            self.assertEqual(10, len(users))

    @patch('azurelinuxagent.ga.remoteaccess.add_event', side_effect=mock_add_event)
    def test_remote_access_handler_run_error(self, _):
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()):
            mock_protocol = WireProtocol("foo.bar")
            mock_protocol.client.get_remote_access = MagicMock(side_effect=Exception("foobar!"))

            rah = RemoteAccessHandler(mock_protocol)
            rah.run()
            print(TestRemoteAccessHandler.eventing_data)
            check_message = "foobar!"
            self.assertTrue(check_message in TestRemoteAccessHandler.eventing_data[4],
                            "expected message {0} not found in {1}"
                            .format(check_message, TestRemoteAccessHandler.eventing_data[4]))
            self.assertEqual(False, TestRemoteAccessHandler.eventing_data[2], "is_success is true")

    def test_remote_access_handler_should_retrieve_users_when_it_is_invoked_the_first_time(self):
        mock_os_util = MagicMock()
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=mock_os_util):
            with mock_wire_protocol(DATA_FILE) as mock_protocol:
                rah = RemoteAccessHandler(mock_protocol)
                rah.run()
                self.assertTrue(len(mock_os_util.get_users.call_args_list) == 1,
                                "The first invocation of remote access should have retrieved the current users")

    def test_remote_access_handler_should_retrieve_users_when_goal_state_contains_jit_users(self):
        mock_os_util = MagicMock()
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=mock_os_util):
            with mock_wire_protocol(DATA_FILE_REMOTE_ACCESS) as mock_protocol:
                rah = RemoteAccessHandler(mock_protocol)
                rah.run()
                self.assertTrue(len(mock_os_util.get_users.call_args_list) > 0,
                                "A goal state with jit users did not retrieve the current users")

    def test_remote_access_handler_should_not_retrieve_users_when_goal_state_does_not_contain_jit_users(self):
        mock_os_util = MagicMock()
        with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=mock_os_util):
            with mock_wire_protocol(DATA_FILE) as mock_protocol:
                rah = RemoteAccessHandler(mock_protocol)
                rah.run()  # this will trigger one call to retrieve the users
                mock_protocol.mock_wire_data.set_incarnation(123)  # mock a new goal state; the data file does not include any jit users
                rah.run()
                self.assertTrue(len(mock_os_util.get_users.call_args_list) == 1,
                                "A goal state without jit users retrieved the current users")

Azure-WALinuxAgent-a976115/tests/ga/test_report_status.py

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the Apache License.

import json

from azurelinuxagent.common.protocol.restapi import VMStatus, ExtHandlerStatus, ExtensionStatus
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.ga.agent_update_handler import get_agent_update_handler
from azurelinuxagent.ga.exthandlers import ExtHandlersHandler
from azurelinuxagent.ga.update import get_update_handler
from tests.lib.mock_update_handler import mock_update_handler
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib.tools import AgentTestCase, patch
from tests.lib import wire_protocol_data
from tests.lib.http_request_predicates import HttpRequestPredicates


class ReportStatusTestCase(AgentTestCase):
    """
    Tests for UpdateHandler._report_status()
    """

    def test_update_handler_should_report_status_when_fetch_goal_state_fails(self):
        # The test executes the main loop of UpdateHandler.run() twice, failing requests for the goal state
        # on the second iteration. We expect the 2 iterations to report status, despite the goal state failure.
        fail_goal_state_request = [False]

        def http_get_handler(url, *_, **__):
            if HttpRequestPredicates.is_goal_state_request(url) and fail_goal_state_request[0]:
                return MockHttpResponse(status=410)
            return None

        def on_new_iteration(iteration):
            fail_goal_state_request[0] = iteration == 2

        with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol:
            exthandlers_handler = ExtHandlersHandler(protocol)
            with patch.object(exthandlers_handler, "run", wraps=exthandlers_handler.run) as exthandlers_handler_run:
                with mock_update_handler(protocol, iterations=2, on_new_iteration=on_new_iteration, exthandlers_handler=exthandlers_handler) as update_handler:
                    with patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("2.2.53")):
                        update_handler.run(debug=True)

                        self.assertEqual(1, exthandlers_handler_run.call_count, "Extensions should have been executed only once.")
                        self.assertEqual(2, len(protocol.mock_wire_data.status_blobs), "Status should have been reported for the 2 iterations.")

                        #
                        # Verify that we reported status for the extension in the test data
                        #
                        first_status = json.loads(protocol.mock_wire_data.status_blobs[0])

                        handler_aggregate_status = first_status.get('aggregateStatus', {}).get("handlerAggregateStatus")
                        self.assertIsNotNone(handler_aggregate_status, "Could not find the handlerAggregateStatus")
                        self.assertEqual(1, len(handler_aggregate_status), "Expected 1 extension status. Got: {0}".format(handler_aggregate_status))

                        extension_status = handler_aggregate_status[0]
                        self.assertEqual("OSTCExtensions.ExampleHandlerLinux", extension_status["handlerName"], "The status does not correspond to the test data")

                        #
                        # Verify that we reported the same status (minus timestamps) in the 2 iterations
                        #
                        second_status = json.loads(protocol.mock_wire_data.status_blobs[1])

                        def remove_timestamps(x):
                            if isinstance(x, list):
                                for v in x:
                                    remove_timestamps(v)
                            elif isinstance(x, dict):
                                for k, v in x.items():
                                    if k == "timestampUTC":
                                        x[k] = ''
                                    else:
                                        remove_timestamps(v)

                        remove_timestamps(first_status)
                        remove_timestamps(second_status)

                        self.assertEqual(first_status, second_status)

    def test_report_status_should_log_errors_only_once_per_goal_state(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=False):  # skip agent update
                with patch("azurelinuxagent.ga.update.logger.warn") as logger_warn:
                    with patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("2.2.53")):
                        update_handler = get_update_handler()
                        update_handler._goal_state = protocol.get_goal_state()  # these tests skip the initialization of the goal state, so do that here
                        exthandlers_handler = ExtHandlersHandler(protocol)
                        agent_update_handler = get_agent_update_handler(protocol)
                        update_handler._report_status(exthandlers_handler, agent_update_handler)
                        self.assertEqual(0, logger_warn.call_count, "UpdateHandler._report_status() should not report WARNINGS when there are no errors")

                        with patch("azurelinuxagent.ga.update.ExtensionsSummary.__init__", side_effect=Exception("TEST EXCEPTION")):  # simulate an error during _report_status()
                            get_warnings = lambda: [args[0] for args, _ in logger_warn.call_args_list if "TEST EXCEPTION" in args[0]]

                            update_handler._report_status(exthandlers_handler, agent_update_handler)
                            update_handler._report_status(exthandlers_handler, agent_update_handler)
                            update_handler._report_status(exthandlers_handler, agent_update_handler)
                            self.assertEqual(1, len(get_warnings()), "UpdateHandler._report_status() should report only 1 WARNING when there are multiple errors within the same goal state")

                            exthandlers_handler.protocol.mock_wire_data.set_incarnation(999)
                            update_handler._try_update_goal_state(exthandlers_handler.protocol)
                            update_handler._report_status(exthandlers_handler, agent_update_handler)
                            self.assertEqual(2, len(get_warnings()), "UpdateHandler._report_status() should continue reporting errors after a new goal state")

    def test_report_status_should_redact_sas_tokens(self):
        original = r'''ONE https://foo.blob.core.windows.net/bar?sv=2000&ss=bfqt&srt=sco&sp=rw&se=2025&st=2022&spr=https&sig=SI%3D
TWO:HTTPS://bar.blob.core.com/foo/bar/foo.txt?sv=2018&sr=b&sig=Yx%3D&st=2023%3A52Z&se=9999%3A59%3A59Z&sp=r
TWO https://bar.com/foo?uid=2018&sr=b
THREE'''
        expected = r'''ONE https://foo.blob.core.windows.net/bar?
TWO:HTTPS://bar.blob.core.com/foo/bar/foo.txt?
TWO https://bar.com/foo?uid=2018&sr=b
THREE'''

        def create_vm_status():
            vm_status = VMStatus(status="Ready", message="Ready")
            vm_status.vmAgent.extensionHandlers = [ExtHandlerStatus(name="TestHandler", message=original)]
            vm_status.vmAgent.extensionHandlers[0].extension_status = ExtensionStatus(name="TestExtension", message=original)
            vm_status.vmAgent.extensionHandlers[0].extension_status.status = "Ready"
            return vm_status

        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.client.status_blob.vm_status = create_vm_status()
            protocol.client.upload_status_blob()

            first_status = json.loads(protocol.mock_wire_data.status_blobs[0])

            handler_aggregate_status = first_status.get('aggregateStatus', {}).get("handlerAggregateStatus")
            self.assertIsNotNone(handler_aggregate_status, "Could not find the handlerAggregateStatus")
            self.assertEqual(1, len(handler_aggregate_status), "Expected 1 extension status. Got: {0}".format(handler_aggregate_status))

            self.assertEqual(expected, handler_aggregate_status[0]['formattedMessage']['message'], "sas tokens not redacted in handler status")

            runtime_settings_status = handler_aggregate_status[0].get("runtimeSettingsStatus")
            self.assertIsNotNone(runtime_settings_status, "Could not find the runtimeSettingsStatus")

            settings_status = runtime_settings_status.get("settingsStatus", {}).get('status')
            self.assertIsNotNone(runtime_settings_status, "Could not find the settingsStatus")

            self.assertEqual(expected, settings_status['formattedMessage']['message'], "sas tokens not redacted in extension status")

    def test_update_handler_should_add_fast_track_to_supported_features_when_it_is_supported(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            self._test_supported_features_includes_fast_track(protocol, True)

    def test_update_handler_should_not_add_fast_track_to_supported_features_when_it_is_not_supported(self):
        def http_get_handler(url, *_, **__):
            if HttpRequestPredicates.is_host_plugin_vm_settings_request(url):
                return MockHttpResponse(status=404)
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, http_get_handler=http_get_handler) as protocol:
            self._test_supported_features_includes_fast_track(protocol, False)

    def _test_supported_features_includes_fast_track(self, protocol, expected):
        with mock_update_handler(protocol) as update_handler:
            update_handler.run(debug=True)

            status = json.loads(protocol.mock_wire_data.status_blobs[0])
            supported_features = status['supportedFeatures']
            includes_fast_track = any(f['Key'] == 'FastTrack' for f in supported_features)

            self.assertEqual(expected, includes_fast_track,
                             "supportedFeatures should {0}include FastTrack. Got: {1}".format("" if expected else "not ", supported_features))

Azure-WALinuxAgent-a976115/tests/ga/test_send_telemetry_events.py

# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import contextlib
import json
import os
import platform
import re
import tempfile
import time
import uuid
from datetime import datetime, timedelta

from mock import MagicMock, Mock, patch, PropertyMock

from azurelinuxagent.common.datacontract import get_properties
from azurelinuxagent.common.event import WALAEventOperation, EVENTS_DIRECTORY
from azurelinuxagent.common.exception import HttpError, ServiceStoppedError
from azurelinuxagent.common.future import ustr, UTC
from azurelinuxagent.common.osutil.factory import get_osutil
from azurelinuxagent.common.protocol.util import ProtocolUtil
from azurelinuxagent.common.protocol.wire import event_to_v1_encoded
from azurelinuxagent.common.telemetryevent import TelemetryEvent, TelemetryEventParam, \
    GuestAgentExtensionEventsSchema
from azurelinuxagent.common.utils import restutil, fileutil, timeutil
from azurelinuxagent.common.version import CURRENT_VERSION, DISTRO_NAME, DISTRO_VERSION, AGENT_VERSION, CURRENT_AGENT, \
    DISTRO_CODE_NAME
from azurelinuxagent.ga.collect_telemetry_events import _CollectAndEnqueueEvents
from azurelinuxagent.ga.send_telemetry_events import get_send_telemetry_events_handler
from tests.ga.test_monitor import random_generator
from tests.lib.mock_wire_protocol import MockHttpResponse, mock_wire_protocol
from tests.lib.http_request_predicates import HttpRequestPredicates
from tests.lib.wire_protocol_data import DATA_FILE
from tests.lib.tools import AgentTestCase, clear_singleton_instances, mock_sleep
from tests.lib.event_logger_tools import EventLoggerTools


class TestSendTelemetryEventsHandler(AgentTestCase, HttpRequestPredicates):
    def setUp(self):
        AgentTestCase.setUp(self)
        clear_singleton_instances(ProtocolUtil)
        self.lib_dir = tempfile.mkdtemp()
        self.event_dir = os.path.join(self.lib_dir, EVENTS_DIRECTORY)
        EventLoggerTools.initialize_event_logger(self.event_dir)

    def tearDown(self):
        AgentTestCase.tearDown(self)
        fileutil.rm_dirs(self.lib_dir)
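The tests below exercise `SendTelemetryEventsHandler`'s batching behavior: events accumulate in a queue, are flushed once a minimum batch size (`_MIN_EVENTS_TO_BATCH`) or wait time is reached, and the queue is drained on `stop()`. As an illustrative aid only, here is a minimal, single-threaded sketch of that pattern; `BatchingSender` and its parameters are invented for this example and are not the agent's actual API.

```python
import queue


class BatchingSender:
    """Collects events and flushes them in batches of `min_events_to_batch` (illustrative only)."""

    def __init__(self, send, min_events_to_batch=5):
        self._send = send  # callable invoked with a list of events
        self._min = min_events_to_batch
        self._queue = queue.Queue()
        self._stopped = False

    def enqueue_event(self, event):
        if self._stopped:
            raise RuntimeError("sender is stopped, not accepting anymore events")
        self._queue.put(event)
        # Flush only once the batch threshold is reached
        if self._queue.qsize() >= self._min:
            self._flush()

    def stop(self):
        # On stop, drain whatever is left in the queue before shutting down
        self._stopped = True
        self._flush()

    def _flush(self):
        batch = []
        while not self._queue.empty():
            batch.append(self._queue.get())
        if batch:
            self._send(batch)


sent = []
sender = BatchingSender(sent.append, min_events_to_batch=3)
for i in range(2):
    sender.enqueue_event(i)
assert sent == []        # below the batch threshold, nothing sent yet
sender.enqueue_event(2)  # reaching the threshold triggers a flush
sender.stop()
print(sent)              # -> [[0, 1, 2]]
```

The real handler additionally runs the flush loop on a background thread and applies a time-based limit (`_MIN_BATCH_WAIT_TIME`), which is why the tests patch those class attributes and sleep durations.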
    _TEST_EVENT_PROVIDER_ID = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    _TEST_EVENT_OPERATION = "TEST_EVENT_OPERATION"

    @contextlib.contextmanager
    def _create_send_telemetry_events_handler(self, timeout=0.5, start_thread=True, batching_queue_limit=1):
        def http_post_handler(url, body, **__):
            if self.is_telemetry_request(url):
                send_telemetry_events_handler.event_calls.append((datetime.now(UTC), body))
                return MockHttpResponse(status=200)
            return None

        with mock_wire_protocol(DATA_FILE, http_post_handler=http_post_handler) as protocol:
            protocol_util = MagicMock()
            protocol_util.get_protocol = Mock(return_value=protocol)
            send_telemetry_events_handler = get_send_telemetry_events_handler(protocol_util)
            send_telemetry_events_handler.event_calls = []
            with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_EVENTS_TO_BATCH", batching_queue_limit):
                with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MAX_TIMEOUT", timeout):
                    send_telemetry_events_handler.get_mock_wire_protocol = lambda: protocol
                    if start_thread:
                        send_telemetry_events_handler.start()
                        self.assertTrue(send_telemetry_events_handler.is_alive(), "Thread didn't start properly!")
                    yield send_telemetry_events_handler

    @staticmethod
    def _stop_handler(telemetry_handler, timeout=0.001):
        # Giving it some grace time to finish execution and then stopping thread
        time.sleep(timeout)
        telemetry_handler.stop()

    def _assert_test_data_in_event_body(self, telemetry_handler, test_events):
        # Stop the thread and Wait for the queue and thread to join
        TestSendTelemetryEventsHandler._stop_handler(telemetry_handler)

        for telemetry_event in test_events:
            event_str = event_to_v1_encoded(telemetry_event)
            found = False
            for _, event_body in telemetry_handler.event_calls:
                if event_str in event_body:
                    found = True
                    break
            self.assertTrue(found, "Event {0} not found in any telemetry calls".format(event_str))

    def _assert_error_event_reported(self, mock_add_event, expected_msg, operation=WALAEventOperation.ReportEventErrors):
        found_msg = False
        for call_args in mock_add_event.call_args_list:
            _, kwargs = call_args
            if expected_msg in kwargs['message'] and kwargs['op'] == operation:
                found_msg = True
                break
        self.assertTrue(found_msg, "Error msg: {0} not reported".format(expected_msg))

    def _setup_and_assert_bad_request_scenarios(self, http_post_handler, expected_msgs):
        with self._create_send_telemetry_events_handler() as telemetry_handler:
            telemetry_handler.get_mock_wire_protocol().set_http_handlers(http_post_handler=http_post_handler)

            with patch("azurelinuxagent.common.event.add_event") as mock_add_event:
                telemetry_handler.enqueue_event(TelemetryEvent())
                TestSendTelemetryEventsHandler._stop_handler(telemetry_handler)
                for msg in expected_msgs:
                    self._assert_error_event_reported(mock_add_event, msg)

    def test_it_should_send_events_properly(self):
        events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))]

        with self._create_send_telemetry_events_handler() as telemetry_handler:
            for test_event in events:
                telemetry_handler.enqueue_event(test_event)

            self._assert_test_data_in_event_body(telemetry_handler, events)

    def test_it_should_send_as_soon_as_events_available_in_queue_with_minimal_batching_limits(self):
        events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))]

        with self._create_send_telemetry_events_handler() as telemetry_handler:
            test_start_time = datetime.now(UTC)
            for test_event in events:
                telemetry_handler.enqueue_event(test_event)

            self._assert_test_data_in_event_body(telemetry_handler, events)

            # Ensure that we send out the data as soon as we enqueue the events
            for event_time, _ in telemetry_handler.event_calls:
                elapsed = event_time - test_start_time
                self.assertLessEqual(elapsed, timedelta(seconds=2), "Request was not sent as soon as possible")

    def test_thread_should_wait_for_events_to_get_in_queue_before_processing(self):
        events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))]

        with self._create_send_telemetry_events_handler(timeout=0.1) as telemetry_handler:
            # Do nothing for some time
            time.sleep(0.3)

            # Ensure that no events were transmitted by the telemetry handler during this time, i.e. telemetry thread was idle
            self.assertEqual(0, len(telemetry_handler.event_calls), "Unwanted calls to telemetry")

            # Now enqueue data and verify send_telemetry_events sends them asap
            for test_event in events:
                telemetry_handler.enqueue_event(test_event)

            self._assert_test_data_in_event_body(telemetry_handler, events)

    def test_it_should_honor_batch_time_limits_before_sending_telemetry(self):
        events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))]
        wait_time = timedelta(seconds=10)
        orig_sleep = time.sleep

        with patch("time.sleep", lambda *_: orig_sleep(0.01)):
            with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_BATCH_WAIT_TIME", wait_time):
                with self._create_send_telemetry_events_handler(batching_queue_limit=5) as telemetry_handler:
                    for test_event in events:
                        telemetry_handler.enqueue_event(test_event)

                    self.assertEqual(0, len(telemetry_handler.event_calls), "No events should have been logged")
                    TestSendTelemetryEventsHandler._stop_handler(telemetry_handler, timeout=0.01)

        wait_time = timedelta(seconds=0.2)
        with patch("time.sleep", lambda *_: orig_sleep(0.05)):
            with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_BATCH_WAIT_TIME", wait_time):
                with self._create_send_telemetry_events_handler(batching_queue_limit=5) as telemetry_handler:
                    test_start_time = datetime.now(UTC)
                    for test_event in events:
                        telemetry_handler.enqueue_event(test_event)

                    while not telemetry_handler.event_calls and (test_start_time + timedelta(seconds=1)) > datetime.now(UTC):
                        # Wait for event calls to be made, wait a max of 1 secs
                        orig_sleep(0.1)

                    self.assertGreater(len(telemetry_handler.event_calls), 0, "No event calls made at all!")
                    self._assert_test_data_in_event_body(telemetry_handler, events)
                    for event_time, _ in telemetry_handler.event_calls:
                        elapsed = event_time - test_start_time
                        # Technically we should send out data after 0.2 secs, but keeping a buffer of 1sec while testing
                        self.assertLessEqual(elapsed, timedelta(seconds=1), "Request was not sent properly")

    def test_it_should_clear_queue_before_stopping(self):
        events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))]
        wait_time = timedelta(seconds=10)

        with patch("time.sleep", lambda *_: mock_sleep(0.01)):
            with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_BATCH_WAIT_TIME", wait_time):
                with self._create_send_telemetry_events_handler(batching_queue_limit=5) as telemetry_handler:
                    for test_event in events:
                        telemetry_handler.enqueue_event(test_event)

                    self.assertEqual(0, len(telemetry_handler.event_calls), "No events should have been logged")
                    TestSendTelemetryEventsHandler._stop_handler(telemetry_handler, timeout=0.01)
                    # After the service is asked to stop, we should send all data in the queue
                    self._assert_test_data_in_event_body(telemetry_handler, events)

    def test_it_should_honor_batch_queue_limits_before_sending_telemetry(self):
        batch_limit = 5

        with self._create_send_telemetry_events_handler(batching_queue_limit=batch_limit) as telemetry_handler:
            events = []

            for _ in range(batch_limit - 1):
                test_event = TelemetryEvent(eventId=ustr(uuid.uuid4()))
                events.append(test_event)
                telemetry_handler.enqueue_event(test_event)

            self.assertEqual(0, len(telemetry_handler.event_calls), "No events should have been logged")

            for _ in range(batch_limit):
                test_event = TelemetryEvent(eventId=ustr(uuid.uuid4()))
                events.append(test_event)
                telemetry_handler.enqueue_event(test_event)

            self._assert_test_data_in_event_body(telemetry_handler, events)

    def test_it_should_raise_on_enqueue_if_service_stopped(self):
        with self._create_send_telemetry_events_handler(start_thread=False) as telemetry_handler:
            # Ensure the thread is stopped
            telemetry_handler.stop()
            with self.assertRaises(ServiceStoppedError) as context_manager:
                telemetry_handler.enqueue_event(TelemetryEvent(eventId=ustr(uuid.uuid4())))

            exception = context_manager.exception
            self.assertIn("{0} is stopped, not accepting anymore events".format(telemetry_handler.get_thread_name()),
                          str(exception))

    def test_it_should_honour_the_incoming_order_of_events(self):
        with self._create_send_telemetry_events_handler(timeout=0.3, start_thread=False) as telemetry_handler:
            for index in range(5):
                telemetry_handler.enqueue_event(TelemetryEvent(eventId=index))

            telemetry_handler.start()
            self.assertTrue(telemetry_handler.is_alive(), "Thread not alive")
            TestSendTelemetryEventsHandler._stop_handler(telemetry_handler)
            _, event_body = telemetry_handler.event_calls[0]
            event_orders = re.findall(r'', event_body.decode('utf-8'))
            self.assertEqual(sorted(event_orders), event_orders, "Events not ordered correctly")

    def test_send_telemetry_events_should_report_event_if_wireserver_returns_http_error(self):
        test_str = "A test exception, Guid: {0}".format(str(uuid.uuid4()))

        def http_post_handler(url, _, **__):
            if self.is_telemetry_request(url):
                return HttpError(test_str)
            return None

        self._setup_and_assert_bad_request_scenarios(http_post_handler, [test_str])

    def test_send_telemetry_events_should_report_event_when_http_post_returning_503(self):
        def http_post_handler(url, _, **__):
            if self.is_telemetry_request(url):
                return MockHttpResponse(restutil.httpclient.SERVICE_UNAVAILABLE)
            return None

        expected_msgs = ["[ProtocolError] [Wireserver Exception] [ProtocolError] [Wireserver Failed]",
                         "[HTTP Failed] Status Code 503"]

        self._setup_and_assert_bad_request_scenarios(http_post_handler, expected_msgs)

    def test_send_telemetry_events_should_add_event_on_unexpected_errors(self):
        with self._create_send_telemetry_events_handler(timeout=0.1) as telemetry_handler:
            with patch("azurelinuxagent.ga.send_telemetry_events.add_event") as mock_add_event:
                with patch("azurelinuxagent.common.protocol.wire.WireClient.report_event") as patch_report_event:
                    test_str = "Test exception, Guid: {0}".format(str(uuid.uuid4()))
                    patch_report_event.side_effect = Exception(test_str)

                    telemetry_handler.enqueue_event(TelemetryEvent())
                    TestSendTelemetryEventsHandler._stop_handler(telemetry_handler, timeout=0.01)

                    self._assert_error_event_reported(mock_add_event, test_str, operation=WALAEventOperation.UnhandledError)

    def _create_extension_event(self, size=0, name="DummyExtension", message="DummyMessage"):
        event_data = self._get_event_data(name=size if size != 0 else name,
                                          message=random_generator(size) if size != 0 else message)
        event_file = os.path.join(self.event_dir, "{0}.tld".format(int(time.time() * 1000000)))
        with open(event_file, 'wb+') as file_descriptor:
            file_descriptor.write(event_data.encode('utf-8'))

    @staticmethod
    def _get_event_data(message, name):
        event = TelemetryEvent(1, TestSendTelemetryEventsHandler._TEST_EVENT_PROVIDER_ID)
        event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, name))
        event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str(CURRENT_VERSION)))
        event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, TestSendTelemetryEventsHandler._TEST_EVENT_OPERATION))
        event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, True))
        event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, message))
        event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, 0))

        data = get_properties(event)
        return json.dumps(data)

    @patch("azurelinuxagent.common.event.TELEMETRY_EVENT_PROVIDER_ID", _TEST_EVENT_PROVIDER_ID)
    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_it_should_enqueue_and_send_events_properly(self, mock_lib_dir, *_):
        mock_lib_dir.return_value = self.lib_dir

        with self._create_send_telemetry_events_handler() as telemetry_handler:
            monitor_handler = _CollectAndEnqueueEvents(telemetry_handler)
            self._create_extension_event(message="Message-Test")

            test_mtime = 1000  # epoch time, in ms
            test_opcodename = timeutil.create_utc_timestamp(datetime.fromtimestamp(test_mtime).replace(tzinfo=UTC))
            test_eventtid = 42
            test_eventpid = 24
            test_taskname = "TEST_TaskName"

            with patch("os.path.getmtime", return_value=test_mtime):
                with patch('os.getpid', return_value=test_eventpid):
                    with patch("threading.Thread.ident", new_callable=PropertyMock(return_value=test_eventtid)):
                        with patch("threading.Thread.name", new_callable=PropertyMock(return_value=test_taskname)):
                            monitor_handler.run()

            TestSendTelemetryEventsHandler._stop_handler(telemetry_handler)

            # Validating the crafted message by the collect_and_send_events call.
            extension_events = self._get_extension_events(telemetry_handler)
            self.assertEqual(1, len(extension_events), "Only 1 event should be sent")

            collected_event = extension_events[0]

            # Some of those expected values come from the mock protocol and imds client set up during test initialization
            osutil = get_osutil()
            osversion = u"{0}:{1}-{2}-{3}:{4}".format(platform.system(), DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, platform.release())

            sample_message = '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             '' \
                             ']]>'.format(AGENT_VERSION, TestSendTelemetryEventsHandler._TEST_EVENT_OPERATION,
                                          CURRENT_AGENT, test_opcodename, test_eventtid, test_eventpid, test_taskname,
                                          osversion, int(osutil.get_total_mem()), osutil.get_processor_cores(),
                                          json.dumps({"CpuArchitecture": platform.machine()})).encode('utf-8')

            self.assertIn(sample_message, collected_event)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_and_send_events_with_small_events(self, mock_lib_dir):
        mock_lib_dir.return_value = self.lib_dir

        with
self._create_send_telemetry_events_handler() as telemetry_handler: sizes = [15, 15, 15, 15] # get the powers of 2 - 2**16 is the limit for power in sizes: size = 2 ** power self._create_extension_event(size) _CollectAndEnqueueEvents(telemetry_handler).run() # The send_event call would be called each time, as we are filling up the buffer up to the brim for each call. TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) self.assertEqual(4, len(self._get_extension_events(telemetry_handler))) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_and_send_events_with_large_events(self, mock_lib_dir): mock_lib_dir.return_value = self.lib_dir with self._create_send_telemetry_events_handler() as telemetry_handler: sizes = [17, 17, 17] # get the powers of 2 for power in sizes: size = 2 ** power self._create_extension_event(size) with patch("azurelinuxagent.common.logger.periodic_warn") as patch_periodic_warn: _CollectAndEnqueueEvents(telemetry_handler).run() TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) self.assertEqual(3, patch_periodic_warn.call_count) # The send_event call should never be called as the events are larger than 2**16. self.assertEqual(0, len(self._get_extension_events(telemetry_handler))) @staticmethod def _get_extension_events(telemetry_handler): return [event_xml for _, event_xml in telemetry_handler.event_calls if TestSendTelemetryEventsHandler._TEST_EVENT_OPERATION in event_xml.decode()]Azure-WALinuxAgent-a976115/tests/ga/test_signature_validation.py000066400000000000000000000444771510742556200247670ustar00rootroot00000000000000# Windows Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import sys

from tests.lib.tools import AgentTestCase, data_dir, patch, skip_if_predicate_true
from azurelinuxagent.ga.signing_certificate_util import write_signing_certificates
from azurelinuxagent.ga.signature_validation_util import validate_signature, SignatureValidationError, validate_handler_manifest_signing_info, \
    ManifestValidationError, _get_openssl_version, openssl_version_supported_for_signature_validation
from azurelinuxagent.ga.exthandlers import HandlerManifest
from azurelinuxagent.common.event import WALAEventOperation
from azurelinuxagent.common.protocol.restapi import Extension
from azurelinuxagent.common.utils.shellutil import CommandError
from azurelinuxagent.common.logger import LogLevel


class TestSignatureValidation(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)
        write_signing_certificates()
        self.vm_access_zip_path = os.path.join(data_dir, "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        vm_access_signature_path = os.path.join(data_dir, "signing/vm_access_signature.txt")
        with open(vm_access_signature_path, 'r') as f:
            self.vm_access_signature = f.read()
        self.package_name_and_version = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux-1.5.0"

    def tearDown(self):
        patch.stopall()
        AgentTestCase.tearDown(self)

    def test_should_validate_signature_successfully(self):
        """
        Test that the signature can be validated successfully without raising an exception.

        Note: The test extension (VMAccess) was signed with a leaf certificate that expires in 2025.
        Even after the expiry date, validation should still succeed because the signature was generated
        when all certs were unexpired. While we could request newly signed versions, leaf certs expire
        fairly quickly (within a year) and we would need to frequently update the test with a new
        signature and package.
        """
        validate_signature(self.vm_access_zip_path, self.vm_access_signature, self.package_name_and_version,
                           failure_log_level=LogLevel.WARNING)

    def test_should_raise_error_if_signature_does_not_match_package(self):
        # This signature is correctly formatted but belongs to a different extension (CSE),
        # signature validation should fail for VMAccess
        with open(os.path.join(data_dir, "signing/invalid_signature.txt"), 'r') as f:
            invalid_signature = f.read()
        with self.assertRaises(SignatureValidationError, msg="Signature is invalid, should have raised error"):
            validate_signature(self.vm_access_zip_path, invalid_signature, self.package_name_and_version,
                               failure_log_level=LogLevel.WARNING)

    def test_should_raise_error_if_package_is_tampered_with(self):
        # This is the VMAccess test extension zip package with one byte modified, signature validation should fail
        modified_ext = os.path.join(data_dir, "signing/Modified_Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        with self.assertRaises(SignatureValidationError, msg="Zip package does not match signature, should have raised error"):
            validate_signature(modified_ext, self.vm_access_signature, self.package_name_and_version,
                               failure_log_level=LogLevel.WARNING)

    def test_should_raise_error_on_incorrect_signing_certificate(self):
        # The root certificate used here is valid (unexpired) and issued by the Microsoft CA, but it does not match the
        # one that signed the package - signature validation should fail.
        incorrect_root_cert_path = os.path.join(data_dir, "signing/incorrect_microsoft_root_cert.pem")
        with patch("azurelinuxagent.ga.signature_validation_util.get_microsoft_signing_certificate_path", return_value=incorrect_root_cert_path):
            with self.assertRaises(SignatureValidationError, msg="Signing certificate does not match, should have raised error") as ex:
                validate_signature(self.vm_access_zip_path, self.vm_access_signature, self.package_name_and_version,
                                   failure_log_level=LogLevel.WARNING)
            expected_error_regex = r"Verify\s*error\s*:\s*unable\s*to\s*get\s*local\s*issuer\s*certificate"
            self.assertRegex(ex.exception.args[0], expected_error_regex,
                             msg="Raised SignatureValidationError but error did not indicate certificate failure")

    def test_should_raise_error_on_missing_signing_certificate(self):
        root_cert_path = os.path.join(self.tmp_dir, "missing_root_cert.pem")
        with patch("azurelinuxagent.ga.signature_validation_util.get_microsoft_signing_certificate_path", return_value=root_cert_path):
            with self.assertRaises(SignatureValidationError, msg="Signing certificate missing, should have raised error") as ex:
                validate_signature(self.vm_access_zip_path, self.vm_access_signature, self.package_name_and_version,
                                   failure_log_level=LogLevel.WARNING)
            self.assertIn("signing certificate was not found", ex.exception.args[0],
                          msg="Error message did not indicate that certificate is missing.")

    def test_should_handle_and_report_error_raised_when_writing_signing_certificate(self):
        # If an error is raised when writing signing certificates, the error should be handled/swallowed but reported
        # via telemetry and log.
        with patch('azurelinuxagent.ga.signing_certificate_util.event.error') as report_err:
            open_target = "builtins.open" if sys.version_info[0] >= 3 else "__builtin__.open"
            with patch(open_target, side_effect=OSError):
                write_signing_certificates()
            signing_errors = [kw for _, kw in report_err.call_args_list if kw['op'] == WALAEventOperation.SignatureValidation]
            self.assertEqual(1, len(signing_errors), "Error writing signing certificates not logged or sent as telemetry")

    def test_should_get_openssl_version(self):
        # Test cases in format (<'openssl version' output>, )
        test_cases = [
            ("OpenSSL version: OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024)", "3.0.13"),
            ("OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020", "1.1.1"),
            ("OpenSSL version: OpenSSL 1.0.2zi-fips 1 Aug 2023", "1.0.2"),
            ("OpenSSL 1.1.1 1 Aug 2023", "1.1.1")
        ]
        for case in test_cases:
            with patch("azurelinuxagent.ga.signature_validation_util.run_command", return_value=case[0]):
                version = _get_openssl_version()
                self.assertEqual(version, case[1], "Returned incorrect openssl version")

    def test_should_not_support_signature_validation_if_fail_to_get_openssl_version(self):
        with patch("azurelinuxagent.ga.signature_validation_util.run_command", side_effect=CommandError("cmd", 1, "", "error")):
            self.assertFalse(openssl_version_supported_for_signature_validation())
        with patch("azurelinuxagent.ga.signature_validation_util.run_command", return_value=None):
            self.assertFalse(openssl_version_supported_for_signature_validation())
        with patch("azurelinuxagent.ga.signature_validation_util.run_command", return_value="some junk output"):
            self.assertFalse(openssl_version_supported_for_signature_validation())

    @skip_if_predicate_true(lambda: True, "Enable this test when timestamp validation has been implemented.")
    def test_should_raise_error_if_root_cert_was_expired_at_signing_time(self):
        # TODO: Test is skipped because it requires timestamp validation implementation. Write this test after
        # timestamp validation has been implemented.
        self.fail()

    @skip_if_predicate_true(lambda: True, "Enable this test when timestamp validation has been implemented.")
    def test_should_raise_error_if_intermediate_cert_was_expired_at_signing_time(self):
        # TODO: Test is skipped because it requires timestamp validation implementation. Write this test after
        # timestamp validation has been implemented.
        self.fail()

    @skip_if_predicate_true(lambda: True, "Enable this test when timestamp validation has been implemented.")
    def test_should_raise_error_if_leaf_cert_was_expired_at_signing_time(self):
        # TODO: Test is skipped because it requires timestamp validation implementation. Write this test after
        # timestamp validation has been implemented.
        self.fail()


class TestHandlerManifestValidation(AgentTestCase):
    def test_should_validate_manifest_successfully(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "publisher": "Microsoft.Azure.Extensions",
                "version": "2.1.13"
            }
        }
        ext_name = "Microsoft.Azure.Extensions.CustomScript"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)

    def test_should_validate_manifest_successfully_for_case_mismatch(self):
        # Manifest validation should be case-insensitive for type and publisher.
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "publisher": "Microsoft.Azure.Extensions",
                "version": "2.1.13"
            }
        }
        ext_name = "microsoft.azure.extensions.customscript"  # Does not match case of handler manifest
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)

    def test_should_raise_error_if_manifest_type_does_not_match(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "publisher": "Microsoft.Azure.Extensions",
                "version": "2.1.13"
            }
        }
        ext_name = "Microsoft.Azure.Extensions.RunCommand"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError, msg="HandlerManifest type does not match extension type, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "expected extension type 'RunCommand' does not match downloaded package type 'CustomScript'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate type mismatch")

    def test_should_raise_error_if_manifest_publisher_does_not_match(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "publisher": "Microsoft.Azure.Extensions",
                "version": "2.1.13"
            }
        }
        ext_name = "Microsoft.CPlat.Core.CustomScript"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError,
                               msg="HandlerManifest publisher does not match extension publisher, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "expected extension publisher 'Microsoft.CPlat.Core' does not match downloaded package publisher 'Microsoft.Azure.Extensions'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate publisher mismatch")

    def test_should_raise_error_if_manifest_version_does_not_match(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "publisher": "Microsoft.Azure.Extensions",
                "version": "2.1.13"
            }
        }
        ext_name = "Microsoft.Azure.Extensions.CustomScript"
        ext_version = "2.2.0"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError,
                               msg="HandlerManifest version does not match extension version, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "expected extension version '2.2.0' does not match downloaded package version '2.1.13'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate version mismatch")

    def test_should_raise_error_if_manifest_does_not_contain_signing_info(self):
        data = {
            "handlerManifest": {}
        }
        ext_name = "Microsoft.Azure.Extensions.CustomScript"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError,
                               msg="HandlerManifest does not contain signingInfo, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "HandlerManifest.json does not contain 'signingInfo'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate missing signingInfo")

    def test_should_raise_error_if_manifest_does_not_contain_signing_info_type(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "publisher": "Microsoft.Azure.Extensions",
                "version": "2.1.13"
            }
        }
        ext_name = "Microsoft.Azure.Extensions.CustomScript"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError,
                               msg="HandlerManifest does not contain signingInfo.type, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "HandlerManifest.json does not contain attribute 'signingInfo.type'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate missing signingInfo.type")

    def test_should_raise_error_if_manifest_does_not_contain_signing_info_publisher(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "version": "2.1.13"
            }
        }
        ext_name = "Microsoft.Azure.Extensions.CustomScript"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError,
                               msg="HandlerManifest does not contain signingInfo.publisher, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "HandlerManifest.json does not contain attribute 'signingInfo.publisher'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate missing signingInfo.publisher")

    def test_should_raise_error_if_manifest_does_not_contain_signing_info_version(self):
        data = {
            "handlerManifest": {},
            "signingInfo": {
                "type": "CustomScript",
                "publisher": "Microsoft.Azure.Extensions"
            }
        }
        ext_name = "Microsoft.Azure.Extensions.CustomScript"
        ext_version = "2.1.13"
        ext_signature = "nonemptysignature"

        manifest = HandlerManifest(data)
        ext_handler = Extension(name=ext_name)
        ext_handler.version = ext_version
        ext_handler.signature = ext_signature

        with self.assertRaises(ManifestValidationError,
                               msg="HandlerManifest does not contain signingInfo.version, should have raised error") as ex:
            validate_handler_manifest_signing_info(manifest, ext_handler, failure_log_level=LogLevel.WARNING)
        expected_error_msg = "HandlerManifest.json does not contain attribute 'signingInfo.version'"
        self.assertIn(expected_error_msg, str(ex.exception.args[0]),
                      msg="Raised ManifestValidationError but error did not indicate missing signingInfo.version")

Azure-WALinuxAgent-a976115/tests/ga/test_signature_validation_sudo.py

# Windows Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os

from tests.lib.tools import AgentTestCase, data_dir, patch, i_am_root
from azurelinuxagent.ga.signing_certificate_util import write_signing_certificates
from azurelinuxagent.ga.signature_validation_util import validate_signature
from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.logger import LogLevel


class TestSignatureValidationSudo(AgentTestCase):
    """
    Tests signature validation scenarios involving certificate expiry, simulated by moving the system clock forward.
    Since modifying system time requires admin privileges, tests in this suite must be run with sudo.
    """
    def setUp(self):
        AgentTestCase.setUp(self)
        write_signing_certificates()
        self.vm_access_zip_path = os.path.join(data_dir, "signing/Microsoft.OSTCExtensions.Edp.VMAccessForLinux__1.7.0.zip")
        vm_access_signature_path = os.path.join(data_dir, "signing/vm_access_signature.txt")
        with open(vm_access_signature_path, 'r') as f:
            self.vm_access_signature = f.read()
        self.package_name_and_version = "Microsoft.OSTCExtensions.Edp.VMAccessForLinux-1.5.0"

    def tearDown(self):
        patch.stopall()
        AgentTestCase.tearDown(self)

    @staticmethod
    def _validate_signature_in_another_year(target_year, package_path, signature, package_name_and_version):
        original_system_year = None
        try:
            original_system_year = shellutil.run_command(["date", "+%Y"]).strip()
            delta = target_year - int(original_system_year)
            if delta > 0:
                shellutil.run_command(["sudo", "date", "-s", "{0} years".format(delta)])
            validate_signature(package_path, signature, package_name_and_version, failure_log_level=LogLevel.WARNING)
        except shellutil.CommandError as ex:
            raise Exception("Failed to retrieve or update system time.\nExit code: {0}\nError details: {1}".format(ex.returncode, ex.stderr))
        finally:
            if original_system_year is not None:
                current_system_year = shellutil.run_command(["date", "+%Y"]).strip()
                if current_system_year != original_system_year:
                    delta = int(current_system_year) - int(original_system_year)
                    shellutil.run_command(["sudo", "date", "-s", "-{0} years".format(delta)])

    def test_should_validate_signature_for_package_signed_with_expired_root_cert(self):
        # Root certificate expires in 2036. This test changes system time to 2037 to simulate root cert expiry.
        # Signature validation should still pass, because the signature was generated when the root certificate was unexpired.
        self.assertTrue(i_am_root(), "Test does not run when non-root")
        TestSignatureValidationSudo._validate_signature_in_another_year(2037, self.vm_access_zip_path,
                                                                        self.vm_access_signature,
                                                                        self.package_name_and_version)

    def test_should_validate_signature_for_package_signed_with_expired_intermediate_cert(self):
        # This test changes system time to 2027 to simulate intermediate cert expiry.
        # Signature validation should still pass, because the signature was generated when the intermediate certificate was unexpired.
        self.assertTrue(i_am_root(), "Test does not run when non-root")
        TestSignatureValidationSudo._validate_signature_in_another_year(2027, self.vm_access_zip_path,
                                                                        self.vm_access_signature,
                                                                        self.package_name_and_version)

    def test_should_validate_signature_for_package_signed_with_leaf_root_cert(self):
        # Leaf certificate expires in September 2025. This test changes system time to 2026 to simulate leaf cert expiry.
        # Signature validation should still pass, because the signature was generated when the leaf certificate was unexpired.
        self.assertTrue(i_am_root(), "Test does not run when non-root")
        TestSignatureValidationSudo._validate_signature_in_another_year(2026, self.vm_access_zip_path,
                                                                        self.vm_access_signature,
                                                                        self.package_name_and_version)

Azure-WALinuxAgent-a976115/tests/ga/test_update.py

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the Apache License.
from __future__ import print_function

import contextlib
import glob
import json
import os
import random
import re
import shutil
import stat
import sys
import tempfile
import time
import unittest
import uuid
import zipfile

from datetime import datetime, timedelta
from threading import current_thread

from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP
from azurelinuxagent.ga.guestagent import GuestAgent, GuestAgentError, AGENT_ERROR_FILE, INITIAL_UPDATE_STATE_FILE, \
    RSM_UPDATE_STATE_FILE
from azurelinuxagent.common import conf
from azurelinuxagent.common.logger import LogLevel
from azurelinuxagent.common.event import EVENTS_DIRECTORY, WALAEventOperation
from azurelinuxagent.common.exception import HttpError, \
    ExitException, AgentMemoryExceededException
from azurelinuxagent.common.future import ustr, UTC, datetime_min_utc, httpclient
from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource
from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol
from azurelinuxagent.common.protocol.restapi import VMAgentFamily, \
    ExtHandlerPackage, ExtHandlerPackageList, Extension, VMStatus, ExtHandlerStatus, ExtensionStatus, \
    VMAgentUpdateStatuses
from azurelinuxagent.common.protocol.util import ProtocolUtil
from azurelinuxagent.common.utils import fileutil, textutil, timeutil, shellutil
from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME, AGENT_STATUS_FILE
from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
from azurelinuxagent.common.version import AGENT_PKG_GLOB, AGENT_DIR_GLOB, AGENT_NAME, AGENT_DIR_PATTERN, \
    AGENT_VERSION, CURRENT_AGENT, CURRENT_VERSION, set_daemon_version, __DAEMON_VERSION_ENV_VARIABLE as DAEMON_VERSION_ENV_VARIABLE
from azurelinuxagent.ga.exthandlers import ExtHandlersHandler, ExtHandlerInstance, HandlerEnvironment, ExtensionStatusValue
from azurelinuxagent.ga.update import \
    get_update_handler, ORPHAN_POLL_INTERVAL, ORPHAN_WAIT_INTERVAL, \
    CHILD_LAUNCH_RESTART_MAX, CHILD_HEALTH_INTERVAL, GOAL_STATE_PERIOD_EXTENSIONS_DISABLED, UpdateHandler, \
    READONLY_FILE_GLOBS, ExtensionsSummary
from azurelinuxagent.ga.signing_certificate_util import _MICROSOFT_ROOT_CERT_2011_03_22, get_microsoft_signing_certificate_path
from tests.lib.mock_firewall_command import MockIpTables, MockFirewallCmd
from tests.lib.mock_update_handler import mock_update_handler
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_MULTIPLE_EXT, DATA_FILE_VM_SETTINGS
from tests.lib.tools import AgentTestCase, data_dir, DEFAULT, patch, load_bin_data, Mock, MagicMock, \
    clear_singleton_instances, skip_if_predicate_true, load_data
from tests.lib import wire_protocol_data
from tests.lib.http_request_predicates import HttpRequestPredicates

NO_ERROR = {
    "last_failure": 0.0,
    "failure_count": 0,
    "was_fatal": False,
    "reason": ''
}

FATAL_ERROR = {
    "last_failure": 42.42,
    "failure_count": 2,
    "was_fatal": True,
    "reason": "Test failure"
}

WITH_ERROR = {
    "last_failure": 42.42,
    "failure_count": 2,
    "was_fatal": False,
    "reason": "Test failure"
}

EMPTY_MANIFEST = {
    "name": "WALinuxAgent",
    "version": 1.0,
    "handlerManifest": {
        "installCommand": "",
        "uninstallCommand": "",
        "updateCommand": "",
        "enableCommand": "",
        "disableCommand": "",
        "rebootAfterInstall": False,
        "reportHeartbeat": False
    }
}


def faux_logger():
    print("STDOUT message")
    print("STDERR message", file=sys.stderr)
    return DEFAULT


@contextlib.contextmanager
def _get_update_handler(iterations=1, test_data=None, protocol=None, autoupdate_enabled=True):
    """
    This function returns a mocked version of the UpdateHandler object to be used for testing.
    It will only run the main loop [iterations] no of times.
    """
    test_data = DATA_FILE if test_data is None else test_data

    with patch.object(HostPluginProtocol, "is_default_channel", False):
        if protocol is None:
            with mock_wire_protocol(test_data) as mock_protocol:
                with mock_update_handler(mock_protocol, iterations=iterations, autoupdate_enabled=autoupdate_enabled) as update_handler:
                    yield update_handler, mock_protocol
        else:
            with mock_update_handler(protocol, iterations=iterations, autoupdate_enabled=autoupdate_enabled) as update_handler:
                yield update_handler, protocol


class UpdateTestCase(AgentTestCase):
    _test_suite_tmp_dir = None
    _agent_zip_dir = None

    @classmethod
    def setUpClass(cls):
        super(UpdateTestCase, cls).setUpClass()
        # copy data_dir/ga/WALinuxAgent-0.0.0.0.zip to _test_suite_tmp_dir/waagent-zip/WALinuxAgent-.zip
        sample_agent_zip = "WALinuxAgent-0.0.0.0.zip"
        test_agent_zip = sample_agent_zip.replace("0.0.0.0", AGENT_VERSION)
        UpdateTestCase._test_suite_tmp_dir = tempfile.mkdtemp()
        UpdateTestCase._agent_zip_dir = os.path.join(UpdateTestCase._test_suite_tmp_dir, "waagent-zip")
        os.mkdir(UpdateTestCase._agent_zip_dir)
        source = os.path.join(data_dir, "ga", sample_agent_zip)
        target = os.path.join(UpdateTestCase._agent_zip_dir, test_agent_zip)
        shutil.copyfile(source, target)
        # The update_handler inherently calls agent update handler, which in turn calls daemon version. So now daemon version logic has fallback if env variable is not set.
        # The fallback calls popen which is not mocked. So we set the env variable to avoid the fallback.
        # This will not change any of the test validations. At the end of all update test validations, we reset the env variable.
        set_daemon_version("1.2.3.4")

    @classmethod
    def tearDownClass(cls):
        super(UpdateTestCase, cls).tearDownClass()
        shutil.rmtree(UpdateTestCase._test_suite_tmp_dir)
        os.environ.pop(DAEMON_VERSION_ENV_VARIABLE)

    @staticmethod
    def _get_agent_pkgs(in_dir=None):
        if in_dir is None:
            in_dir = UpdateTestCase._agent_zip_dir
        path = os.path.join(in_dir, AGENT_PKG_GLOB)
        return glob.glob(path)

    @staticmethod
    def _get_agents(in_dir=None):
        if in_dir is None:
            in_dir = UpdateTestCase._agent_zip_dir
        path = os.path.join(in_dir, AGENT_DIR_GLOB)
        return [a for a in glob.glob(path) if os.path.isdir(a)]

    @staticmethod
    def _get_agent_file_path():
        return UpdateTestCase._get_agent_pkgs()[0]

    @staticmethod
    def _get_agent_file_name():
        return os.path.basename(UpdateTestCase._get_agent_file_path())

    @staticmethod
    def _get_agent_path():
        return fileutil.trim_ext(UpdateTestCase._get_agent_file_path(), "zip")

    @staticmethod
    def _get_agent_name():
        return os.path.basename(UpdateTestCase._get_agent_path())

    @staticmethod
    def _get_agent_version():
        return FlexibleVersion(UpdateTestCase._get_agent_name().split("-")[1])

    @staticmethod
    def _add_write_permission_to_goal_state_files():
        # UpdateHandler.run() marks some of the files from the goal state as read-only. Those files are overwritten when
        # a new goal state is fetched. This is not a problem for the agent, since it runs as root, but tests need
        # to make those files writable before fetching a new goal state. Note that UpdateHandler.run() fetches a new
        # goal state, so tests that make multiple calls to that method need to call this function in-between calls.
        for gb in READONLY_FILE_GLOBS:
            for path in glob.iglob(os.path.join(conf.get_lib_dir(), gb)):
                fileutil.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

    def agent_bin(self, version, suffix):
        return "bin/{0}-{1}{2}.egg".format(AGENT_NAME, version, suffix)

    def rename_agent_bin(self, path, dst_v):
        src_bin = glob.glob(os.path.join(path, self.agent_bin("*.*.*.*", '*')))[0]
        dst_bin = os.path.join(path, self.agent_bin(dst_v, ''))
        shutil.move(src_bin, dst_bin)

    def agents(self):
        return [GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]

    def agent_count(self):
        return len(self.agent_dirs())

    def agent_dirs(self):
        return self._get_agents(in_dir=self.tmp_dir)

    def agent_dir(self, version):
        return os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, version))

    def agent_paths(self):
        paths = glob.glob(os.path.join(self.tmp_dir, "*"))
        paths.sort()
        return paths

    def agent_pkgs(self):
        return self._get_agent_pkgs(in_dir=self.tmp_dir)

    def agent_versions(self):
        v = [FlexibleVersion(AGENT_DIR_PATTERN.match(a).group(1)) for a in self.agent_dirs()]
        v.sort(reverse=True)
        return v

    @contextlib.contextmanager
    def get_error_file(self, error_data=None):
        if error_data is None:
            error_data = NO_ERROR
        with tempfile.NamedTemporaryFile(mode="w") as fp:
            json.dump(error_data if error_data is not None else NO_ERROR, fp)
            fp.seek(0)
            yield fp

    def create_error(self, error_data=None):
        if error_data is None:
            error_data = NO_ERROR
        with self.get_error_file(error_data) as path:
            err = GuestAgentError(path.name)
            err.load()
            return err

    def copy_agents(self, *agents):
        if len(agents) <= 0:
            agents = self._get_agent_pkgs()
        for agent in agents:
            shutil.copy(agent, self.tmp_dir)
        return

    def expand_agents(self):
        for agent in self.agent_pkgs():
            path = os.path.join(self.tmp_dir, fileutil.trim_ext(agent, "zip"))
            zipfile.ZipFile(agent).extractall(path)

    def prepare_agent(self, version):
        """
        Create a download for the current agent version, copied from test data
        """
        self.copy_agents(self._get_agent_pkgs()[0])
        self.expand_agents()

        versions = self.agent_versions()
        src_v = FlexibleVersion(str(versions[0]))

        from_path = self.agent_dir(src_v)
        dst_v = FlexibleVersion(str(version))
        to_path = self.agent_dir(dst_v)

        if from_path != to_path:
            shutil.move(from_path + ".zip", to_path + ".zip")
            shutil.move(from_path, to_path)
            self.rename_agent_bin(to_path, dst_v)
        return

    def prepare_agents(self, count=20, is_available=True):
        # Ensure the test data is copied over
        agent_count = self.agent_count()
        if agent_count <= 0:
            self.copy_agents(self._get_agent_pkgs()[0])
            self.expand_agents()
            count -= 1

        # Determine the most recent agent version
        versions = self.agent_versions()
        src_v = FlexibleVersion(str(versions[0]))

        # Create agent packages and directories
        return self.replicate_agents(
            src_v=src_v,
            count=count - agent_count,
            is_available=is_available)

    def remove_agents(self):
        for agent in self.agent_paths():
            try:
                if os.path.isfile(agent):
                    os.remove(agent)
                else:
                    shutil.rmtree(agent)
            except:  # pylint: disable=bare-except
                pass
        return

    def replicate_agents(self, count=5, src_v=AGENT_VERSION, is_available=True, increment=1):
        from_path = self.agent_dir(src_v)
        dst_v = FlexibleVersion(str(src_v))
        for i in range(0, count):  # pylint: disable=unused-variable
            dst_v += increment
            to_path = self.agent_dir(dst_v)
            shutil.copyfile(from_path + ".zip", to_path + ".zip")
            shutil.copytree(from_path, to_path)
            self.rename_agent_bin(to_path, dst_v)
            if not is_available:
                GuestAgent.from_installed_agent(to_path).mark_failure(is_fatal=True)
        return dst_v


class TestUpdate(UpdateTestCase):
    def setUp(self):
        UpdateTestCase.setUp(self)
        self.event_patch = patch('azurelinuxagent.common.event.add_event')
        self.update_handler = get_update_handler()
        protocol = Mock()
        self.update_handler.protocol_util = Mock()
        self.update_handler.protocol_util.get_protocol = Mock(return_value=protocol)
        self.update_handler._goal_state = Mock()
        self.update_handler._goal_state.extensions_goal_state = Mock()
self.update_handler._goal_state.extensions_goal_state.source = "Fabric" # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not reuse # a previous state clear_singleton_instances(ProtocolUtil) def test_creation(self): self.assertEqual(0, len(self.update_handler.agents)) self.assertEqual(None, self.update_handler.child_agent) self.assertEqual(None, self.update_handler.child_launch_time) self.assertEqual(0, self.update_handler.child_launch_attempts) self.assertEqual(None, self.update_handler.child_process) self.assertEqual(None, self.update_handler.signal_handler) def test_emit_restart_event_emits_event_if_not_clean_start(self): try: mock_event = self.event_patch.start() self.update_handler._set_sentinel() self.update_handler._emit_restart_event() self.assertEqual(1, mock_event.call_count) except Exception as e: # pylint: disable=unused-variable pass self.event_patch.stop() def _create_protocol(self, count=20, versions=None): latest_version = self.prepare_agents(count=count) if versions is None or len(versions) <= 0: versions = [latest_version] return ProtocolMock(versions=versions) def _test_ensure_no_orphans(self, invocations=3, interval=ORPHAN_WAIT_INTERVAL, pid_count=0): with patch.object(self.update_handler, 'osutil') as mock_util: # Note: # - Python only allows mutations of objects to which a function has # a reference. Incrementing an integer directly changes the # reference. Incrementing an item of a list changes an item to # which the code has a reference. 
# See http://stackoverflow.com/questions/26408941/python-nested-functions-and-variable-scope iterations = [0] def iterator(*args, **kwargs): # pylint: disable=unused-argument iterations[0] += 1 return iterations[0] < invocations mock_util.check_pid_alive = Mock(side_effect=iterator) pid_files = self.update_handler._get_pid_files() self.assertEqual(pid_count, len(pid_files)) with patch('os.getpid', return_value=42): with patch('time.sleep', return_value=None) as mock_sleep: # pylint: disable=redefined-outer-name self.update_handler._ensure_no_orphans(orphan_wait_interval=interval) for pid_file in pid_files: self.assertFalse(os.path.exists(pid_file)) return mock_util.check_pid_alive.call_count, mock_sleep.call_count def test_ensure_no_orphans(self): fileutil.write_file(os.path.join(self.tmp_dir, "0_waagent.pid"), ustr(41)) calls, sleeps = self._test_ensure_no_orphans(invocations=3, pid_count=1) self.assertEqual(3, calls) self.assertEqual(2, sleeps) def test_ensure_no_orphans_skips_if_no_orphans(self): calls, sleeps = self._test_ensure_no_orphans(invocations=3) self.assertEqual(0, calls) self.assertEqual(0, sleeps) def test_ensure_no_orphans_ignores_exceptions(self): with patch('azurelinuxagent.common.utils.fileutil.read_file', side_effect=Exception): calls, sleeps = self._test_ensure_no_orphans(invocations=3) self.assertEqual(0, calls) self.assertEqual(0, sleeps) def test_ensure_no_orphans_kills_after_interval(self): fileutil.write_file(os.path.join(self.tmp_dir, "0_waagent.pid"), ustr(41)) with patch('os.kill') as mock_kill: calls, sleeps = self._test_ensure_no_orphans( invocations=4, interval=3 * ORPHAN_POLL_INTERVAL, pid_count=1) self.assertEqual(3, calls) self.assertEqual(2, sleeps) self.assertEqual(1, mock_kill.call_count) def test_ensure_readonly_sets_readonly(self): test_files = [ os.path.join(conf.get_lib_dir(), "faux_certificate.crt"), os.path.join(conf.get_lib_dir(), "faux_certificate.p7m"), os.path.join(conf.get_lib_dir(), "faux_certificate.pem"), 
os.path.join(conf.get_lib_dir(), "faux_certificate.prv"), os.path.join(conf.get_lib_dir(), "ovf-env.xml") ] for path in test_files: fileutil.write_file(path, "Faux content") os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH) self.update_handler._ensure_readonly_files() for path in test_files: mode = os.stat(path).st_mode mode &= (stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO) self.assertEqual(0, mode ^ stat.S_IRUSR) def test_ensure_readonly_leaves_unmodified(self): test_files = [ os.path.join(conf.get_lib_dir(), "faux.xml"), os.path.join(conf.get_lib_dir(), "faux.json"), os.path.join(conf.get_lib_dir(), "faux.txt"), os.path.join(conf.get_lib_dir(), "faux") ] for path in test_files: fileutil.write_file(path, "Faux content") os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH) self.update_handler._ensure_readonly_files() for path in test_files: mode = os.stat(path).st_mode mode &= (stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO) self.assertEqual( stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH, mode) def _test_evaluate_agent_health(self, child_agent_index=0): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertFalse(latest_agent.is_blacklisted) self.assertTrue(len(self.update_handler.agents) > 1) child_agent = self.update_handler.agents[child_agent_index] self.assertTrue(child_agent.is_available) self.assertFalse(child_agent.is_blacklisted) self.update_handler.child_agent = child_agent self.update_handler._evaluate_agent_health(latest_agent) def test_evaluate_agent_health_ignores_installed_agent(self): self.update_handler._evaluate_agent_health(None) def test_evaluate_agent_health_raises_exception_for_restarting_agent(self): self.update_handler.child_launch_time = time.time() - (4 * 60) self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX - 1 self.assertRaises(Exception, self._test_evaluate_agent_health) def 
test_evaluate_agent_health_will_not_raise_exception_for_long_restarts(self): self.update_handler.child_launch_time = time.time() - 24 * 60 self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX self._test_evaluate_agent_health() def test_evaluate_agent_health_will_not_raise_exception_too_few_restarts(self): self.update_handler.child_launch_time = time.time() self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX - 2 self._test_evaluate_agent_health() def test_evaluate_agent_health_resets_with_new_agent(self): self.update_handler.child_launch_time = time.time() - (4 * 60) self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX - 1 self._test_evaluate_agent_health(child_agent_index=1) self.assertEqual(1, self.update_handler.child_launch_attempts) def test_filter_blacklisted_agents(self): self.prepare_agents() self.update_handler._set_and_sort_agents([GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]) self.assertEqual(len(self.agent_dirs()), len(self.update_handler.agents)) kept_agents = self.update_handler.agents[::2] blacklisted_agents = self.update_handler.agents[1::2] for agent in blacklisted_agents: agent.mark_failure(is_fatal=True) self.update_handler._filter_blacklisted_agents() self.assertEqual(kept_agents, self.update_handler.agents) def test_find_agents(self): self.prepare_agents() self.assertTrue(0 <= len(self.update_handler.agents)) self.update_handler._find_agents() self.assertEqual(len(self._get_agents(self.tmp_dir)), len(self.update_handler.agents)) def test_find_agents_does_reload(self): self.prepare_agents() self.update_handler._find_agents() agents = self.update_handler.agents self.update_handler._find_agents() self.assertNotEqual(agents, self.update_handler.agents) def test_find_agents_sorts(self): self.prepare_agents() self.update_handler._find_agents() v = FlexibleVersion("100000") for a in self.update_handler.agents: self.assertTrue(v > a.version) v = a.version def 
test_get_latest_agent(self): latest_version = self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertEqual(len(self._get_agents(self.tmp_dir)), len(self.update_handler.agents)) self.assertEqual(latest_version, latest_agent.version) def test_get_latest_agent_excluded(self): self.prepare_agent(AGENT_VERSION) self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) def test_get_latest_agent_no_updates(self): self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) def test_get_latest_agent_skip_updates(self): conf.get_autoupdate_enabled = Mock(return_value=False) self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) def test_get_latest_agent_skips_unavailable(self): self.prepare_agents() prior_agent = self.update_handler.get_latest_agent_greater_than_daemon() latest_version = self.prepare_agents(count=self.agent_count() + 1, is_available=False) latest_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, latest_version)) self.assertFalse(GuestAgent.from_installed_agent(latest_path).is_available) latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.version < latest_version) self.assertEqual(latest_agent.version, prior_agent.version) def test_get_pid_files(self): pid_files = self.update_handler._get_pid_files() self.assertEqual(0, len(pid_files)) def test_get_pid_files_returns_previous(self): for n in range(1250): fileutil.write_file(os.path.join(self.tmp_dir, str(n) + "_waagent.pid"), ustr(n + 1)) pid_files = self.update_handler._get_pid_files() self.assertEqual(1250, len(pid_files)) pid_dir, pid_name, pid_re = self.update_handler._get_pid_parts() # pylint: disable=unused-variable for p in pid_files: self.assertTrue(pid_re.match(os.path.basename(p))) def test_is_clean_start_returns_true_when_no_sentinel(self): 
self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) self.assertTrue(self.update_handler._is_clean_start) def test_is_clean_start_returns_false_when_sentinel_exists(self): self.update_handler._set_sentinel(agent=CURRENT_AGENT) self.assertFalse(self.update_handler._is_clean_start) def test_is_clean_start_returns_false_for_exceptions(self): self.update_handler._set_sentinel() with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=Exception): self.assertFalse(self.update_handler._is_clean_start) def test_is_orphaned_returns_false_if_parent_exists(self): fileutil.write_file(conf.get_agent_pid_file_path(), ustr(42)) with patch('os.getppid', return_value=42): self.assertFalse(self.update_handler._is_orphaned) def test_is_orphaned_returns_true_if_parent_is_init(self): with patch('os.getppid', return_value=1): self.assertTrue(self.update_handler._is_orphaned) def test_is_orphaned_returns_true_if_parent_does_not_exist(self): fileutil.write_file(conf.get_agent_pid_file_path(), ustr(24)) with patch('os.getppid', return_value=42): self.assertTrue(self.update_handler._is_orphaned) def test_purge_agents(self): self.prepare_agents() self.update_handler._find_agents() # Ensure at least three agents initially exist self.assertTrue(2 < len(self.update_handler.agents)) # Purge every other agent. 
Don't add the current version to agents_to_keep explicitly; # the current version is never purged agents_to_keep = [] kept_agents = [] purged_agents = [] for i in range(0, len(self.update_handler.agents)): if self.update_handler.agents[i].version == CURRENT_VERSION: kept_agents.append(self.update_handler.agents[i]) else: if i % 2 == 0: agents_to_keep.append(self.update_handler.agents[i]) kept_agents.append(self.update_handler.agents[i]) else: purged_agents.append(self.update_handler.agents[i]) # Reload and assert only the kept agents remain on disk self.update_handler.agents = agents_to_keep self.update_handler._purge_agents() self.update_handler._find_agents() self.assertEqual( [agent.version for agent in kept_agents], [agent.version for agent in self.update_handler.agents]) # Ensure both directories and packages are removed for agent in purged_agents: agent_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, agent.version)) self.assertFalse(os.path.exists(agent_path)) self.assertFalse(os.path.exists(agent_path + ".zip")) # Ensure kept agent directories and packages remain for agent in kept_agents: agent_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, agent.version)) self.assertTrue(os.path.exists(agent_path)) self.assertTrue(os.path.exists(agent_path + ".zip")) def _test_run_latest(self, mock_child=None, mock_time=None, child_args=None): if mock_child is None: mock_child = ChildMock() if mock_time is None: mock_time = TimeMock() with patch('azurelinuxagent.ga.update.subprocess.Popen', return_value=mock_child) as mock_popen: with patch('time.time', side_effect=mock_time.time): with patch('time.sleep', side_effect=mock_time.sleep): self.update_handler.run_latest(child_args=child_args) agent_calls = [args[0] for (args, _) in mock_popen.call_args_list if "run-exthandlers" in ''.join(args[0])] self.assertEqual(1, len(agent_calls), "Expected a single call to the latest agent; got: {0}. 
All mocked calls: {1}".format( agent_calls, mock_popen.call_args_list)) return mock_popen.call_args def test_run_latest(self): self.prepare_agents() with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=True): agent = self.update_handler.get_latest_agent_greater_than_daemon() args, kwargs = self._test_run_latest() args = args[0] cmds = textutil.safe_shlex_split(agent.get_agent_cmd()) if cmds[0].lower() == "python": cmds[0] = sys.executable self.assertEqual(args, cmds) self.assertTrue(len(args) > 1) self.assertRegex(args[0], r"^(/.*/python[\d.]*)$", "The command doesn't contain full python path") self.assertEqual("-run-exthandlers", args[len(args) - 1]) self.assertEqual(True, 'cwd' in kwargs) self.assertEqual(agent.get_agent_dir(), kwargs['cwd']) self.assertEqual(False, '\x00' in cmds[0]) def test_run_latest_picks_latest_agent_when_update_to_latest_version_is_used(self): self.prepare_agents(10) with patch("azurelinuxagent.common.conf.is_present", return_value=True): with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=False): running_agent_args, running_agent_kwargs = self._test_run_latest() running_agent_args = running_agent_args[0] latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() latest_agent_cmds = textutil.safe_shlex_split(latest_agent.get_agent_cmd()) if latest_agent_cmds[0].lower() == "python": latest_agent_cmds[0] = sys.executable self.assertEqual(running_agent_args, latest_agent_cmds) self.assertTrue(len(running_agent_args) > 1) self.assertRegex(running_agent_args[0], r"^(/.*/python[\d.]*)$", "The command doesn't contain full python path") self.assertEqual("-run-exthandlers", running_agent_args[len(running_agent_args) - 1]) self.assertEqual(True, 'cwd' in running_agent_kwargs) self.assertEqual(latest_agent.get_agent_dir(), running_agent_kwargs['cwd']) def test_run_latest_picks_installed_agent_when_update_to_latest_version_is_not_used_and_autoupdates_disabled(self): 
self.prepare_agents(10) with patch("azurelinuxagent.common.conf.is_present", return_value=False): with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=False): running_agent_args, _ = self._test_run_latest() running_agent_args = running_agent_args[0] latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() latest_agent_cmds = textutil.safe_shlex_split(latest_agent.get_agent_cmd()) if latest_agent_cmds[0].lower() == "python": latest_agent_cmds[0] = sys.executable self.assertNotEqual(running_agent_args, latest_agent_cmds) def test_run_latest_passes_child_args(self): self.prepare_agents() self.update_handler.get_latest_agent_greater_than_daemon() args, _ = self._test_run_latest(child_args="AnArgument") args = args[0] self.assertTrue(len(args) > 1) self.assertRegex(args[0], r"^(/.*/python[\d.]*)$", "The command doesn't contain full python path") self.assertEqual("AnArgument", args[len(args) - 1]) def test_run_latest_polls_and_waits_for_success(self): mock_child = ChildMock(return_value=None) mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 3) self._test_run_latest(mock_child=mock_child, mock_time=mock_time) self.assertEqual(2, mock_child.poll.call_count) self.assertEqual(1, mock_child.wait.call_count) def test_run_latest_polling_stops_at_success(self): mock_child = ChildMock(return_value=0) mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 3) self._test_run_latest(mock_child=mock_child, mock_time=mock_time) self.assertEqual(1, mock_child.poll.call_count) self.assertEqual(0, mock_child.wait.call_count) def test_run_latest_polling_stops_at_failure(self): mock_child = ChildMock(return_value=42) mock_time = TimeMock() self._test_run_latest(mock_child=mock_child, mock_time=mock_time) self.assertEqual(1, mock_child.poll.call_count) self.assertEqual(0, mock_child.wait.call_count) def test_run_latest_polls_frequently_if_installed_is_latest(self): mock_child = ChildMock(return_value=0) # pylint: 
disable=unused-variable mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 2) self._test_run_latest(mock_time=mock_time) self.assertEqual(1, mock_time.sleep_interval) def test_run_latest_polls_every_second_if_installed_not_latest(self): self.prepare_agents() mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 2) self._test_run_latest(mock_time=mock_time) self.assertEqual(1, mock_time.sleep_interval) def test_run_latest_defaults_to_current(self): self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) args, kwargs = self._test_run_latest() self.assertEqual(args[0], [sys.executable, "-u", sys.argv[0], "-run-exthandlers"]) self.assertEqual(True, 'cwd' in kwargs) self.assertEqual(os.getcwd(), kwargs['cwd']) def test_run_latest_forwards_output(self): try: tempdir = tempfile.mkdtemp() stdout_path = os.path.join(tempdir, "stdout") stderr_path = os.path.join(tempdir, "stderr") with open(stdout_path, "w") as stdout: with open(stderr_path, "w") as stderr: saved_stdout, sys.stdout = sys.stdout, stdout saved_stderr, sys.stderr = sys.stderr, stderr try: self._test_run_latest(mock_child=ChildMock(side_effect=faux_logger)) finally: sys.stdout = saved_stdout sys.stderr = saved_stderr with open(stdout_path, "r") as stdout: self.assertEqual(1, len(stdout.readlines())) with open(stderr_path, "r") as stderr: self.assertEqual(1, len(stderr.readlines())) finally: shutil.rmtree(tempdir, True) def test_run_latest_nonzero_code_does_not_mark_failure(self): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) with patch('azurelinuxagent.ga.update.UpdateHandler.get_latest_agent_greater_than_daemon', return_value=latest_agent): self._test_run_latest(mock_child=ChildMock(return_value=1)) self.assertFalse(latest_agent.is_blacklisted, "Agent should not be 
blacklisted") def test_run_latest_exception_blacklists(self): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) verify_string = "Force blacklisting: {0}".format(str(uuid.uuid4())) with patch('azurelinuxagent.ga.update.UpdateHandler.get_latest_agent_greater_than_daemon', return_value=latest_agent): with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=True): self._test_run_latest(mock_child=ChildMock(side_effect=Exception(verify_string))) self.assertFalse(latest_agent.is_available) self.assertTrue(latest_agent.error.is_blacklisted) self.assertNotEqual(0.0, latest_agent.error.last_failure) self.assertEqual(1, latest_agent.error.failure_count) self.assertIn(verify_string, latest_agent.error.reason, "Error reason not found while blacklisting") def test_run_latest_exception_does_not_blacklist_if_terminating(self): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) with patch('azurelinuxagent.ga.update.UpdateHandler.get_latest_agent_greater_than_daemon', return_value=latest_agent): self.update_handler.is_running = False self._test_run_latest(mock_child=ChildMock(side_effect=Exception("Attempt blacklisting"))) self.assertTrue(latest_agent.is_available) self.assertFalse(latest_agent.error.is_blacklisted) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) @patch('signal.signal') def test_run_latest_captures_signals(self, mock_signal): self._test_run_latest() self.assertEqual(1, mock_signal.call_count) @patch('signal.signal') def test_run_latest_creates_only_one_signal_handler(self, mock_signal): 
self.update_handler.signal_handler = "Not None" self._test_run_latest() self.assertEqual(0, mock_signal.call_count) def test_get_latest_agent_should_return_latest_agent_even_on_bad_error_json(self): dst_ver = self.prepare_agents() # Add a malformed error.json file in all existing agents for agent_dir in self.agent_dirs(): error_file_path = os.path.join(agent_dir, AGENT_ERROR_FILE) with open(error_file_path, 'w') as f: f.write("") latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertEqual(latest_agent.version, dst_ver, "Latest agent version is invalid") def test_set_agents_sets_agents(self): self.prepare_agents() self.update_handler._set_and_sort_agents([GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]) self.assertTrue(len(self.update_handler.agents) > 0) self.assertEqual(len(self.agent_dirs()), len(self.update_handler.agents)) def test_set_agents_sorts_agents(self): self.prepare_agents() self.update_handler._set_and_sort_agents([GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]) v = FlexibleVersion("100000") for a in self.update_handler.agents: self.assertTrue(v > a.version) v = a.version def test_set_sentinel(self): self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) self.update_handler._set_sentinel() self.assertTrue(os.path.isfile(self.update_handler._sentinel_file_path())) def test_set_sentinel_writes_current_agent(self): self.update_handler._set_sentinel() self.assertTrue( fileutil.read_file(self.update_handler._sentinel_file_path()), CURRENT_AGENT) def test_shutdown(self): self.update_handler._set_sentinel() self.update_handler._shutdown() self.assertFalse(self.update_handler.is_running) self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) def test_shutdown_ignores_missing_sentinel_file(self): self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) self.update_handler._shutdown() 
self.assertFalse(self.update_handler.is_running) self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) def test_shutdown_ignores_exceptions(self): self.update_handler._set_sentinel() try: with patch("os.remove", side_effect=Exception): self.update_handler._shutdown() except Exception as e: # pylint: disable=unused-variable self.assertTrue(False, "Unexpected exception") # pylint: disable=redundant-unittest-assert def test_write_pid_file(self): for n in range(1112): fileutil.write_file(os.path.join(self.tmp_dir, str(n) + "_waagent.pid"), ustr(n + 1)) with patch('os.getpid', return_value=1112): pid_files, pid_file = self.update_handler._write_pid_file() self.assertEqual(1112, len(pid_files)) self.assertEqual("1111_waagent.pid", os.path.basename(pid_files[-1])) self.assertEqual("1112_waagent.pid", os.path.basename(pid_file)) self.assertEqual(fileutil.read_file(pid_file), ustr(1112)) def test_write_pid_file_ignores_exceptions(self): with patch('azurelinuxagent.common.utils.fileutil.write_file', side_effect=Exception): with patch('os.getpid', return_value=42): pid_files, pid_file = self.update_handler._write_pid_file() self.assertEqual(0, len(pid_files)) self.assertEqual(None, pid_file) def test_update_happens_when_extensions_disabled(self): """ Although the extension enabled config will not get checked before an update is found, this test attempts to ensure that behavior never changes. 
""" with patch('azurelinuxagent.common.conf.get_extensions_enabled', return_value=False): with patch('azurelinuxagent.ga.agent_update_handler.AgentUpdateHandler.run') as download_agent: with mock_wire_protocol(DATA_FILE) as protocol: with mock_update_handler(protocol, autoupdate_enabled=True) as update_handler: update_handler.run() self.assertEqual(1, download_agent.call_count, "Agent update did not execute (no attempts to download the agent") @staticmethod def _get_test_ext_handler_instance(protocol, name="OSTCExtensions.ExampleHandlerLinux", version="1.0.0"): eh = Extension(name=name) eh.version = version return ExtHandlerInstance(eh, protocol) def test_update_handler_recovers_from_error_with_no_certs(self): data = DATA_FILE.copy() data['goal_state'] = 'wire/goal_state_no_certs.xml' def fail_gs_fetch(url, *_, **__): if HttpRequestPredicates.is_goal_state_request(url): return MockHttpResponse(status=500) return None with mock_wire_protocol(data) as protocol: def fail_fetch_on_second_iter(iteration): if iteration == 2: protocol.set_http_handlers(http_get_handler=fail_gs_fetch) if iteration > 2: # Zero out the fail handler for subsequent iterations. 
protocol.set_http_handlers(http_get_handler=None) with mock_update_handler(protocol, 3, on_new_iteration=fail_fetch_on_second_iter) as update_handler: with patch("azurelinuxagent.ga.update.logger.error") as patched_error: with patch("azurelinuxagent.ga.update.logger.info") as patched_info: def match_unexpected_errors(): unexpected_msg_fragment = "Error fetching the goal state:" matching_errors = [] for (args, _) in filter(lambda a: len(a) > 0, patched_error.call_args_list): if unexpected_msg_fragment in args[0]: matching_errors.append(args[0]) if len(matching_errors) > 1: self.fail("Guest Agent did not recover, with new error(s): {}"\ .format(matching_errors[1:])) def match_expected_info(): expected_msg_fragment = "Fetching the goal state recovered from previous errors" for (call_args, _) in filter(lambda a: len(a) > 0, patched_info.call_args_list): if expected_msg_fragment in call_args[0]: break else: self.fail("Expected the guest agent to recover with '{}', but it didn't"\ .format(expected_msg_fragment)) update_handler.run(debug=True) match_unexpected_errors() # Match on errors first, they can provide more info. 
                    match_expected_info()

    def test_it_should_recreate_handler_env_on_service_startup(self):
        iterations = 5

        with _get_update_handler(iterations, autoupdate_enabled=False) as (update_handler, protocol):
            update_handler.run(debug=True)
            expected_handler = self._get_test_ext_handler_instance(protocol)
            handler_env_file = expected_handler.get_env_file()

            self.assertTrue(os.path.exists(expected_handler.get_base_dir()), "Extension not found")
            # The first iteration should install the extension handler, and
            # subsequent iterations should not recreate the HandlerEnvironment file
            last_modification_time = os.path.getmtime(handler_env_file)
            self.assertEqual(os.path.getctime(handler_env_file), last_modification_time,
                             "The creation time and last modified time of the HandlerEnvironment file don't match")

        # Simulate a service restart by getting a new instance of the update handler and protocol, and
        # re-running the update handler. Then ensure that the HandlerEnvironment file is recreated with the
        # eventsFolder flag in the HandlerEnvironment.json file.
        self._add_write_permission_to_goal_state_files()
        with _get_update_handler(iterations=1, autoupdate_enabled=False) as (update_handler, protocol):
            with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
                update_handler.run(debug=True)

        self.assertGreater(os.path.getmtime(handler_env_file), last_modification_time,
                           "HandlerEnvironment file didn't get overwritten")
        with open(handler_env_file, 'r') as handler_env_content_file:
            content = json.load(handler_env_content_file)
        self.assertIn(HandlerEnvironment.eventsFolder, content[0][HandlerEnvironment.handlerEnvironment],
                      "{0} not found in HandlerEnv file".format(HandlerEnvironment.eventsFolder))

    def test_it_should_setup_the_firewall(self):
        with patch('azurelinuxagent.common.conf.enable_firewall', return_value=True):
            with MockIpTables() as mock_iptables:
                with MockFirewallCmd() as mock_firewall_cmd:
                    # Make the check commands for the regular rules return 1 to indicate these
                    # rules are not yet set, and 0 for the legacy rule to indicate it is set
                    mock_iptables.set_return_values("-C", accept_dns=1, accept=1, drop=1, legacy=0)
                    mock_firewall_cmd.set_return_values("--query-passthrough", accept_dns=1, accept=1, drop=1, legacy=0)

                    with _get_update_handler(test_data=DATA_FILE) as (update_handler, _):
                        update_handler.run(debug=True)

                    #
                    # Check regular rules
                    #
                    self.assertEqual(
                        [
                            # Remove the legacy rule
                            MockIpTables.get_legacy_command("-C"),
                            MockIpTables.get_legacy_command("-D"),
                            # Setup the firewall rules
                            MockIpTables.get_accept_dns_command("-C"),
                            MockIpTables.get_accept_command("-C"),
                            MockIpTables.get_drop_command("-C"),
                            MockIpTables.get_accept_dns_command("-A"),
                            MockIpTables.get_accept_command("-A"),
                            MockIpTables.get_drop_command("-A"),
                        ],
                        mock_iptables.call_list,
                        "Expected 2 calls for the legacy rule (-C and -D), followed by 3 sets of calls for the current rules (-C and -A)")

                    #
                    # Check permanent rules
                    #
                    self.assertEqual(
                        [
                            # Remove the legacy rule
                            MockFirewallCmd.get_legacy_command("--query-passthrough"),
                            MockFirewallCmd.get_legacy_command("--remove-passthrough"),
                            # Setup the firewall rules
                            MockFirewallCmd.get_accept_dns_command("--query-passthrough"),
                            MockFirewallCmd.get_accept_command("--query-passthrough"),
                            MockFirewallCmd.get_drop_command("--query-passthrough"),
                            MockFirewallCmd.get_accept_dns_command("--passthrough"),
                            MockFirewallCmd.get_accept_command("--passthrough"),
                            MockFirewallCmd.get_drop_command("--passthrough"),
                        ],
                        mock_firewall_cmd.call_list,
                        "Expected 2 calls for the legacy rule (--query-passthrough and --remove-passthrough), followed by 3 sets of calls for the current rules (--query-passthrough and --passthrough)")

    @contextlib.contextmanager
    def _setup_test_for_ext_event_dirs_retention(self):
        try:
            # In _get_update_handler() contextmanager, yield is used inside an if-else block and that's creating a false positive pylint warning
            with _get_update_handler(test_data=DATA_FILE_MULTIPLE_EXT, autoupdate_enabled=False) as (update_handler, protocol):  # pylint: disable=contextmanager-generator-missing-cleanup
                with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
                    update_handler.run(debug=True)
                    expected_events_dirs = glob.glob(os.path.join(conf.get_ext_log_dir(), "*", EVENTS_DIRECTORY))
                    no_of_extensions = protocol.mock_wire_data.get_no_of_plugins_in_extension_config()
                    # Ensure extensions installed and events directory created
                    self.assertEqual(len(expected_events_dirs), no_of_extensions, "Extension events directories don't match")
                    for ext_dir in expected_events_dirs:
                        self.assertTrue(os.path.exists(ext_dir), "Extension directory {0} not created!".format(ext_dir))

                    yield update_handler, expected_events_dirs
        finally:
            # The TestUpdate.setUp() initializes the self.tmp_dir to be used as a placeholder
            # for everything (event logger, status logger, conf.get_lib_dir() and more).
            # Since we add more data to the dir for this test, ensure it's completely clean before exiting the test.
            shutil.rmtree(self.tmp_dir, ignore_errors=True)
            self.tmp_dir = None

    def test_it_should_delete_extension_events_directory_if_extension_telemetry_pipeline_disabled(self):
        # Disable extension telemetry pipeline and ensure events directory got deleted
        with self._setup_test_for_ext_event_dirs_retention() as (update_handler, expected_events_dirs):
            with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", False):
                self._add_write_permission_to_goal_state_files()
                update_handler.run(debug=True)
                for ext_dir in expected_events_dirs:
                    self.assertFalse(os.path.exists(ext_dir), "Extension directory {0} still exists!".format(ext_dir))

    def test_it_should_retain_extension_events_directories_if_extension_telemetry_pipeline_enabled(self):
        # Rerun update handler again with extension telemetry pipeline enabled to ensure we don't delete events directories
        with self._setup_test_for_ext_event_dirs_retention() as (update_handler, expected_events_dirs):
            self._add_write_permission_to_goal_state_files()
            update_handler.run(debug=True)
            for ext_dir in expected_events_dirs:
                self.assertTrue(os.path.exists(ext_dir), "Extension directory {0} should exist!".format(ext_dir))

    def test_it_should_recreate_extension_event_directories_for_existing_extensions_if_extension_telemetry_pipeline_enabled(self):
        with self._setup_test_for_ext_event_dirs_retention() as (update_handler, expected_events_dirs):
            # Delete existing events directory
            for ext_dir in expected_events_dirs:
                shutil.rmtree(ext_dir, ignore_errors=True)
                self.assertFalse(os.path.exists(ext_dir), "Extension directory not deleted")

            with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
                self._add_write_permission_to_goal_state_files()
                update_handler.run(debug=True)
                for ext_dir in expected_events_dirs:
                    self.assertTrue(os.path.exists(ext_dir), "Extension directory {0} should exist!".format(ext_dir))

    def test_it_should_report_update_status_in_status_blob(self):
        with mock_wire_protocol(DATA_FILE) as protocol:
            with patch.object(conf, "get_autoupdate_gafamily", return_value="Prod"):
                with patch("azurelinuxagent.common.conf.get_enable_ga_versioning", return_value=True):
                    with patch("azurelinuxagent.common.logger.warn") as patch_warn:

                        protocol.aggregate_status = None
                        protocol.incarnation = 1

                        def get_handler(url, **kwargs):
                            if HttpRequestPredicates.is_agent_package_request(url):
                                return MockHttpResponse(status=httpclient.SERVICE_UNAVAILABLE)
                            return protocol.mock_wire_data.mock_http_get(url, **kwargs)

                        def put_handler(url, *args, **_):
                            if HttpRequestPredicates.is_host_plugin_status_request(url):
                                # Skip reading the HostGA request data as it's encoded
                                return MockHttpResponse(status=500)
                            protocol.aggregate_status = json.loads(args[0])
                            return MockHttpResponse(status=201)

                        def update_goal_state_and_run_handler(autoupdate_enabled=True):
                            protocol.incarnation += 1
                            protocol.mock_wire_data.set_incarnation(protocol.incarnation)
                            self._add_write_permission_to_goal_state_files()
                            with _get_update_handler(iterations=1, protocol=protocol, autoupdate_enabled=autoupdate_enabled) as (update_handler, _):
                                update_handler.run(debug=True)
                            self.assertEqual(0, update_handler.get_exit_code(),
                                             "Exit code should be 0; List of all warnings logged by the agent: {0}".format(patch_warn.call_args_list))

                        protocol.set_http_handlers(http_get_handler=get_handler, http_put_handler=put_handler)

                        # mocking first agent update attempted
                        open(os.path.join(conf.get_lib_dir(), INITIAL_UPDATE_STATE_FILE), "a").close()
                        # mocking rsm update attempted
                        open(os.path.join(conf.get_lib_dir(), RSM_UPDATE_STATE_FILE), "a").close()

                        # Case 1: rsm version missing in GS when VM opts in for rsm upgrades; report missing rsm version error
                        protocol.mock_wire_data.set_extension_config("wire/ext_conf_version_missing_in_agent_family.xml")
                        update_goal_state_and_run_handler()
                        self.assertTrue("updateStatus" in protocol.aggregate_status['aggregateStatus']['guestAgentStatus'],
                                        "updateStatus should be reported")
                        update_status = protocol.aggregate_status['aggregateStatus']['guestAgentStatus']["updateStatus"]
                        self.assertEqual(VMAgentUpdateStatuses.Error, update_status['status'], "Status should be an error")
                        self.assertEqual(update_status['code'], 1, "incorrect code reported")
                        self.assertIn("missing version property. So, skipping agent update", update_status['formattedMessage']['message'], "incorrect message reported")

                        # Case 2: rsm version in GS == Current Version; updateStatus should be Success
                        protocol.mock_wire_data.set_extension_config("wire/ext_conf_rsm_version.xml")
                        protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
                        update_goal_state_and_run_handler()
                        self.assertTrue("updateStatus" in protocol.aggregate_status['aggregateStatus']['guestAgentStatus'],
                                        "updateStatus should be reported if asked in GS")
                        update_status = protocol.aggregate_status['aggregateStatus']['guestAgentStatus']["updateStatus"]
                        self.assertEqual(VMAgentUpdateStatuses.Success, update_status['status'], "Status should be successful")
                        self.assertEqual(update_status['expectedVersion'], str(CURRENT_VERSION), "incorrect version reported")
                        self.assertEqual(update_status['code'], 0, "incorrect code reported")

                        # Case 3: rsm version in GS != Current Version; update fails and reports error
                        protocol.mock_wire_data.set_extension_config("wire/ext_conf_rsm_version.xml")
                        protocol.mock_wire_data.set_version_in_agent_family("9.9.9.999")
                        update_goal_state_and_run_handler()
                        self.assertTrue("updateStatus" in protocol.aggregate_status['aggregateStatus']['guestAgentStatus'],
                                        "updateStatus should be in status blob. Warns: {0}".format(patch_warn.call_args_list))
                        update_status = protocol.aggregate_status['aggregateStatus']['guestAgentStatus']["updateStatus"]
                        self.assertEqual(VMAgentUpdateStatuses.Error, update_status['status'], "Status should be an error")
                        self.assertEqual(update_status['expectedVersion'], str(CURRENT_VERSION), "incorrect version reported")
                        self.assertEqual(update_status['code'], 1, "incorrect code reported")

    def test_it_should_wait_to_fetch_first_goal_state(self):
        with _get_update_handler() as (update_handler, protocol):
            with patch("azurelinuxagent.common.logger.error") as patch_error:
                with patch("azurelinuxagent.common.logger.info") as patch_info:
                    # Fail GS fetching for the first 5 times the agent asks for it
                    update_handler._fail_gs_count = 5

                    def get_handler(url, **kwargs):
                        if HttpRequestPredicates.is_goal_state_request(url) and update_handler._fail_gs_count > 0:
                            update_handler._fail_gs_count -= 1
                            return MockHttpResponse(status=500)
                        return protocol.mock_wire_data.mock_http_get(url, **kwargs)

                    protocol.set_http_handlers(http_get_handler=get_handler)
                    update_handler.run(debug=True)

            self.assertEqual(0, update_handler.get_exit_code(),
                             "Exit code should be 0; List of all errors logged by the agent: {0}".format(patch_error.call_args_list))

            error_msgs = [args[0] for (args, _) in patch_error.call_args_list if
                          "Error fetching the goal state" in args[0]]
            self.assertTrue(len(error_msgs) > 0, "Error should've been reported when failed to retrieve GS")

            info_msgs = [args[0] for (args, _) in patch_info.call_args_list if
                         "Fetching the goal state recovered from previous errors." in args[0]]
            self.assertTrue(len(info_msgs) > 0, "Agent should've logged a message when recovered from GS errors")

    def test_it_should_write_signing_certificate_string_to_file(self):
        with _get_update_handler() as (update_handler, _):
            update_handler.run(debug=True)
            cert_path = get_microsoft_signing_certificate_path()
            self.assertTrue(os.path.isfile(cert_path))
            with open(cert_path, 'r') as f:
                self.assertEqual(f.read(), _MICROSOFT_ROOT_CERT_2011_03_22,
                                 msg="Signing certificate was not correctly written to expected file location")

    def test_agent_should_send_event_if_known_wireserver_ip_not_used(self):
        with _get_update_handler() as (update_handler, _):
            # Mock WireProtocol endpoint with known wireserver ip
            with patch('azurelinuxagent.common.protocol.wire.WireProtocol.get_endpoint', return_value=KNOWN_WIRESERVER_IP):
                with patch('azurelinuxagent.common.event.EventLogger.add_event') as patch_add_event:
                    update_handler.run(debug=True)
                    # Get any events for ProtocolEndpoint operation
                    protocol_endpoint_events = [kwargs for _, kwargs in patch_add_event.call_args_list if kwargs['op'] == 'ProtocolEndpoint']
                    # Daemon should not send ProtocolEndpoint event if endpoint is known wireserver IP
                    self.assertTrue(len(protocol_endpoint_events) == 0)

            # Mock WireProtocol endpoint with unknown ip
            with patch('azurelinuxagent.common.protocol.wire.WireProtocol.get_endpoint', return_value='1.1.1.1'):
                with patch('azurelinuxagent.common.event.EventLogger.add_event') as patch_add_event:
                    update_handler.run(debug=True)
                    # Get any events for ProtocolEndpoint operation
                    protocol_endpoint_events = [kwargs for _, kwargs in patch_add_event.call_args_list if kwargs['op'] == 'ProtocolEndpoint']
                    # Daemon should send ProtocolEndpoint event if endpoint is not known wireserver IP
                    self.assertTrue(len(protocol_endpoint_events) == 1)


class TestUpdateWaitForCloudInit(AgentTestCase):
    @staticmethod
    @contextlib.contextmanager
    def create_mock_run_command(delay=None):
        def run_command_mock(cmd, *args, **kwargs):
            if cmd == ["cloud-init", "status", "--wait"]:
                if delay is not None:
                    original_run_command(['sleep', str(delay)], *args, **kwargs)
                return "cloud-init completed"
            return original_run_command(cmd, *args, **kwargs)
        original_run_command = shellutil.run_command

        with patch("azurelinuxagent.ga.update.shellutil.run_command", side_effect=run_command_mock) as run_command_patch:
            yield run_command_patch

    def test_it_should_not_wait_for_cloud_init_by_default(self):
        update_handler = UpdateHandler()
        with self.create_mock_run_command() as run_command_patch:
            update_handler._wait_for_cloud_init()
            self.assertTrue(run_command_patch.call_count == 0, "'cloud-init status --wait' should not be called by default")

    def test_it_should_wait_for_cloud_init_when_requested(self):
        update_handler = UpdateHandler()
        with patch("azurelinuxagent.ga.update.conf.get_wait_for_cloud_init", return_value=True):
            with self.create_mock_run_command() as run_command_patch:
                update_handler._wait_for_cloud_init()
                self.assertEqual(1, run_command_patch.call_count, "'cloud-init status --wait' should have been called once")

    @skip_if_predicate_true(lambda: sys.version_info[0] == 2, "Timeouts are not supported on Python 2")
    def test_it_should_enforce_timeout_waiting_for_cloud_init(self):
        update_handler = UpdateHandler()
        with patch("azurelinuxagent.ga.update.conf.get_wait_for_cloud_init", return_value=True):
            with patch("azurelinuxagent.ga.update.conf.get_wait_for_cloud_init_timeout", return_value=1):
                with self.create_mock_run_command(delay=5):
                    with patch("azurelinuxagent.ga.update.logger.error") as mock_logger:
                        update_handler._wait_for_cloud_init()
                    call_args = [args for args, _ in mock_logger.call_args_list if "An error occurred while waiting for cloud-init" in args[0]]
                    self.assertTrue(
                        len(call_args) == 1 and len(call_args[0]) == 1 and "command timeout" in call_args[0][0],
                        "Expected a timeout waiting for cloud-init. Log calls: {0}".format(mock_logger.call_args_list))

    def test_update_handler_should_wait_for_cloud_init_after_agent_update_and_before_extension_processing(self):
        method_calls = []

        agent_update_handler = Mock()
        agent_update_handler.run = lambda *_, **__: method_calls.append("AgentUpdateHandler.run()")

        exthandlers_handler = Mock()
        exthandlers_handler.run = lambda *_, **__: method_calls.append("ExtHandlersHandler.run()")

        with mock_wire_protocol(DATA_FILE) as protocol:
            with mock_update_handler(protocol, iterations=1, agent_update_handler=agent_update_handler, exthandlers_handler=exthandlers_handler) as update_handler:
                with patch('azurelinuxagent.ga.update.UpdateHandler._wait_for_cloud_init', side_effect=lambda *_, **__: method_calls.append("UpdateHandler._wait_for_cloud_init()")):
                    update_handler.run()

        self.assertListEqual(
            ["AgentUpdateHandler.run()", "UpdateHandler._wait_for_cloud_init()", "ExtHandlersHandler.run()"],
            method_calls,
            "Wait for cloud-init should happen after agent update and before extension processing")


class UpdateHandlerRunTestCase(AgentTestCase):
    def _test_run(self, autoupdate_enabled=False, check_daemon_running=False, expected_exit_code=0, emit_restart_event=None):
        fileutil.write_file(conf.get_agent_pid_file_path(), ustr(42))

        with patch('azurelinuxagent.ga.update.get_monitor_handler') as mock_monitor:
            with patch('azurelinuxagent.ga.remoteaccess.get_remote_access_handler') as mock_ra_handler:
                with patch('azurelinuxagent.ga.update.get_env_handler') as mock_env:
                    with patch('azurelinuxagent.ga.update.get_collect_logs_handler') as mock_collect_logs:
                        with patch('azurelinuxagent.ga.update.get_send_telemetry_events_handler') as mock_telemetry_send_events:
                            with patch('azurelinuxagent.ga.update.get_collect_telemetry_events_handler') as mock_event_collector:
                                with patch('azurelinuxagent.ga.update.initialize_event_logger_vminfo_common_parameters_and_protocol'):
                                    with patch('azurelinuxagent.ga.update.is_log_collection_allowed', return_value=True):
                                        with mock_wire_protocol(DATA_FILE) as protocol:
                                            mock_exthandlers_handler = Mock()
                                            with mock_update_handler(
                                                    protocol,
                                                    exthandlers_handler=mock_exthandlers_handler,
                                                    remote_access_handler=mock_ra_handler,
                                                    autoupdate_enabled=autoupdate_enabled,
                                                    check_daemon_running=check_daemon_running
                                            ) as update_handler:
                                                if emit_restart_event is not None:
                                                    update_handler._emit_restart_event = emit_restart_event

                                                if isinstance(os.getppid, MagicMock):
                                                    update_handler.run()
                                                else:
                                                    with patch('os.getppid', return_value=42):
                                                        update_handler.run()

                                                self.assertEqual(1, mock_monitor.call_count)
                                                self.assertEqual(1, mock_env.call_count)
                                                self.assertEqual(1, mock_collect_logs.call_count)
                                                self.assertEqual(1, mock_telemetry_send_events.call_count)
                                                self.assertEqual(1, mock_event_collector.call_count)
                                                self.assertEqual(expected_exit_code, update_handler.get_exit_code())

                                                if update_handler.get_iterations_completed() > 0:  # some test cases exit before executing extensions or remote access
                                                    self.assertEqual(1, mock_exthandlers_handler.run.call_count)
                                                    self.assertEqual(1, mock_ra_handler.run.call_count)

                                                return update_handler

    def test_run(self):
        self._test_run()

    def test_run_stops_if_orphaned(self):
        with patch('os.getppid', return_value=1):
            update_handler = self._test_run(check_daemon_running=True)
            self.assertEqual(0, update_handler.get_iterations_completed())

    def test_run_clears_sentinel_on_successful_exit(self):
        update_handler = self._test_run()
        self.assertFalse(os.path.isfile(update_handler._sentinel_file_path()))

    def test_run_leaves_sentinel_on_unsuccessful_exit(self):
        with patch('azurelinuxagent.ga.agent_update_handler.AgentUpdateHandler.run', side_effect=Exception):
            update_handler = self._test_run(autoupdate_enabled=True, expected_exit_code=1)
            self.assertTrue(os.path.isfile(update_handler._sentinel_file_path()))

    def test_run_emits_restart_event(self):
        update_handler = self._test_run(emit_restart_event=Mock())
        self.assertEqual(1, update_handler._emit_restart_event.call_count)


class TestAgentUpgrade(UpdateTestCase):
    @contextlib.contextmanager
    def create_conf_mocks(self, autoupdate_frequency, hotfix_frequency, normal_frequency):
        # Disabling extension processing to speed up tests as this class deals with testing agent upgrades
        with patch("azurelinuxagent.common.conf.get_extensions_enabled", return_value=False):
            with patch("azurelinuxagent.common.conf.get_autoupdate_frequency", return_value=autoupdate_frequency):
                with patch("azurelinuxagent.common.conf.get_self_update_hotfix_frequency", return_value=hotfix_frequency):
                    with patch("azurelinuxagent.common.conf.get_self_update_regular_frequency", return_value=normal_frequency):
                        with patch("azurelinuxagent.common.conf.get_autoupdate_gafamily", return_value="Prod"):
                            with patch("azurelinuxagent.common.conf.get_enable_ga_versioning", return_value=True):
                                yield

    @contextlib.contextmanager
    def __get_update_handler(self, iterations=1, test_data=None, reload_conf=None,
                             autoupdate_frequency=0.001, hotfix_frequency=10, normal_frequency=10,
                             initial_update_attempted=True, mock_random_update_time=True):

        if initial_update_attempted:
            open(os.path.join(conf.get_lib_dir(), INITIAL_UPDATE_STATE_FILE), "a").close()

        test_data = DATA_FILE if test_data is None else test_data
        # In _get_update_handler() contextmanager, yield is used inside an if-else block and that's creating a false positive pylint warning
        with _get_update_handler(iterations, test_data) as (update_handler, protocol):  # pylint: disable=contextmanager-generator-missing-cleanup

            protocol.aggregate_status = None

            def get_handler(url, **kwargs):
                if reload_conf is not None:
                    reload_conf(url, protocol)

                if HttpRequestPredicates.is_agent_package_request(url):
                    agent_pkg = load_bin_data(self._get_agent_file_name(), self._agent_zip_dir)
                    protocol.mock_wire_data.call_counts['agentArtifact'] += 1
                    return MockHttpResponse(status=httpclient.OK, body=agent_pkg)
                return protocol.mock_wire_data.mock_http_get(url, **kwargs)

            def put_handler(url, *args, **_):
                if HttpRequestPredicates.is_host_plugin_status_request(url):
                    # Skip reading the HostGA request data as it's encoded
                    return MockHttpResponse(status=500)
                protocol.aggregate_status = json.loads(args[0])
                return MockHttpResponse(status=201)

            original_randint = random.randint

            def _mock_random_update_time(a, b):
                if mock_random_update_time:
                    # update should occur immediately
                    return 0
                if b == 1:
                    # handle tests where the normal or hotfix frequency is mocked to be very short (e.g., 1 second).
                    # Returning a very small delay (0.001 seconds) ensures the logic is tested without introducing significant waiting time
                    return 0.001
                # If none of the above conditions are met, return an additional 10-second delay. This represents
                # a normal delay for updates in scenarios where updates are not expected immediately
                return original_randint(a, b) + 10

            protocol.set_http_handlers(http_get_handler=get_handler, http_put_handler=put_handler)
            with self.create_conf_mocks(autoupdate_frequency, hotfix_frequency, normal_frequency):
                with patch("azurelinuxagent.ga.self_update_version_updater.random.randint",
                           side_effect=_mock_random_update_time):
                    with patch("azurelinuxagent.common.event.EventLogger.add_event") as mock_telemetry:
                        update_handler._protocol = protocol
                        yield update_handler, mock_telemetry

    def __assert_exit_code_successful(self, update_handler):
        self.assertEqual(0, update_handler.get_exit_code(), "Exit code should be 0")

    def __assert_upgrade_telemetry_emitted(self, mock_telemetry, upgrade=True, version="9.9.9.10"):
        upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if
                              'Current Agent {0} completed all update checks, exiting current process to {1} to the new Agent version {2}'.format(
                                  CURRENT_VERSION, "upgrade" if upgrade else "downgrade", version) in kwarg['message'] and
                              kwarg['op'] == WALAEventOperation.AgentUpgrade]
        self.assertEqual(1, len(upgrade_event_msgs),
                         "Did not find the event indicating that the agent was upgraded. Got: {0}".format(mock_telemetry.call_args_list))

    def __assert_agent_directories_available(self, versions):
        for version in versions:
            self.assertTrue(os.path.exists(self.agent_dir(version)), "Agent directory {0} not found".format(version))

    def __assert_agent_directories_exist_and_others_dont_exist(self, versions):
        self.__assert_agent_directories_available(versions=versions)
        other_agents = [agent_dir for agent_dir in self.agent_dirs() if
                        agent_dir not in [self.agent_dir(version) for version in versions]]
        self.assertFalse(any(other_agents),
                         "All other agents should be purged from agent dir: {0}".format(other_agents))

    def __assert_ga_version_in_status(self, aggregate_status, version=str(CURRENT_VERSION)):
        self.assertIsNotNone(aggregate_status, "Status should be reported")
        self.assertEqual(aggregate_status['aggregateStatus']['guestAgentStatus']['version'], version,
                         "Status should be reported from the Current version")
        self.assertEqual(aggregate_status['aggregateStatus']['guestAgentStatus']['status'], 'Ready',
                         "Guest Agent should be reported as Ready")

    def test_it_should_upgrade_agent_on_process_start_if_auto_upgrade_enabled(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(test_data=data_file, iterations=10) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.assertEqual(1, update_handler.get_iterations(), "Update handler should've exited after the first run")
            self.__assert_agent_directories_available(versions=["9.9.9.10"])
            self.__assert_upgrade_telemetry_emitted(mock_telemetry)

    def test_it_should_not_update_agent_with_rsm_if_gs_not_updated_in_next_attempts(self):
        no_of_iterations = 10
        data_file = DATA_FILE.copy()
        data_file['ext_conf'] = "wire/ext_conf_rsm_version.xml"

        self.prepare_agents(1)
        test_frequency = 10
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file,
                                       autoupdate_frequency=test_frequency) as (update_handler, _):
            # Given version which will fail on first attempt, then rsm shouldn't make any further attempts since GS is not updated
            update_handler._protocol.mock_wire_data.set_version_in_agent_family("9.9.9.999")
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.assertEqual(no_of_iterations, update_handler.get_iterations(), "Update handler should've run its course")
            self.assertFalse(os.path.exists(self.agent_dir("5.2.0.1")), "New agent directory should not be found")
            self.assertGreaterEqual(update_handler._protocol.mock_wire_data.call_counts["manifest_of_ga.xml"], 1,
                                    "only 1 agent manifest call should've been made - 1 per incarnation")

    def test_it_should_not_auto_upgrade_if_auto_update_disabled(self):
        with self.__get_update_handler(iterations=10) as (update_handler, _):
            with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=False):
                update_handler.run(debug=True)

                self.__assert_exit_code_successful(update_handler)
                self.assertGreaterEqual(update_handler.get_iterations(), 10, "Update handler should've run 10 times")
                self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")),
                                 "New agent directory should not be found")

    def test_it_should_download_only_rsm_version_if_available(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="9.9.9.10")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10"])

    def test_it_should_download_largest_version_if_ga_versioning_disabled(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            with patch.object(conf, "get_enable_ga_versioning", return_value=False):
                update_handler.run(debug=True)

                self.__assert_exit_code_successful(update_handler)
                self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
                self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0"])

    def test_it_should_cleanup_all_agents_except_rsm_version_and_current_version(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        # Set the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="9.9.9.10")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10", str(CURRENT_VERSION)])

    def test_it_should_not_update_if_rsm_version_not_found_in_manifest(self):
        self.prepare_agents(1)
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_version_missing_in_manifest.xml"
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)])
            agent_msgs = [kwarg for _, kwarg in mock_telemetry.call_args_list if
                          kwarg['op'] in (WALAEventOperation.AgentUpgrade, WALAEventOperation.Download)]
            # This will throw if corresponding message not found so not asserting on that
            rsm_version_found = next(kwarg for kwarg in agent_msgs if
                                     "New agent version:9.9.9.999 requested by RSM in Goal state incarnation_1, will update the agent before processing the goal state" in kwarg['message'])
            self.assertTrue(rsm_version_found['is_success'],
                            "The rsm version found op should be reported as a success")

            skipping_update = next(kwarg for kwarg in agent_msgs if
                                   "No matching package found in the agent manifest for version: 9.9.9.999 in goal state incarnation: incarnation_1, skipping agent update" in kwarg['message'])
            self.assertEqual(skipping_update['version'], str(CURRENT_VERSION),
                             "The not found message should be reported from current agent version")
            self.assertFalse(skipping_update['is_success'], "The not found op should be reported as a failure")

    def test_it_should_try_downloading_rsm_version_on_new_incarnation(self):
        no_of_iterations = 1000

        # Set the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 10 and mock_wire_data.call_counts["goalstate"] < 15:
                # Ensure we didn't try to download any agents except during the incarnation change
                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the rsm version to "99999.0.0.0"
                update_handler._protocol.mock_wire_data.set_version_in_agent_family("99999.0.0.0")
                reload_conf.call_count += 1
                self._add_write_permission_to_goal_state_files()
                reload_conf.incarnation += 1
                mock_wire_data.set_incarnation(reload_conf.incarnation)

        reload_conf.call_count = 0
        reload_conf.incarnation = 2

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file,
                                       reload_conf=reload_conf) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.assertGreaterEqual(reload_conf.call_count, 1, "Reload conf not updated as expected")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)])
            self.assertEqual(update_handler._protocol.mock_wire_data.call_counts['agentArtifact'], 1,
                             "only 1 agent should've been downloaded - 1 per incarnation")
            self.assertGreaterEqual(update_handler._protocol.mock_wire_data.call_counts["manifest_of_ga.xml"], 1,
                                    "only 1 agent manifest call should've been made - 1 per incarnation")

    def test_it_should_update_to_largest_version_if_rsm_version_not_available(self):
        no_of_iterations = 100

        # Set the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                # By this point, the GS with rsm version should've been executed. Verify that
                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the ga_manifest and incarnation to send largest version manifest
                # this should download largest version requested in config
                mock_wire_data.data_files["ga_manifest"] = "wire/ga_manifest.xml"
                mock_wire_data.reload()
                self._add_write_permission_to_goal_state_files()
                reload_conf.incarnation += 1
                mock_wire_data.set_incarnation(reload_conf.incarnation)

        reload_conf.call_count = 0
        reload_conf.incarnation = 2

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf.xml"
        data_file["ga_manifest"] = "wire/ga_manifest_no_upgrade.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file,
                                       reload_conf=reload_conf) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)])

    def test_it_should_not_update_largest_version_if_time_window_not_elapsed(self):
        no_of_iterations = 20

        # Set the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the ga_manifest and incarnation to send largest version manifest
                mock_wire_data.data_files["ga_manifest"] = "wire/ga_manifest.xml"
                mock_wire_data.reload()
self._add_write_permission_to_goal_state_files() reload_conf.incarnation += 1 mock_wire_data.set_incarnation(reload_conf.incarnation) reload_conf.call_count = 0 reload_conf.incarnation = 2 data_file = wire_protocol_data.DATA_FILE.copy() # This is to fail the agent update at first attempt so that agent doesn't go through update data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf, mock_random_update_time=False) as (update_handler, _): update_handler._protocol.mock_wire_data.set_incarnation(2) update_handler.run(debug=True) self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated") self.__assert_exit_code_successful(update_handler) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_update_largest_version_if_time_window_elapsed(self): no_of_iterations = 20 # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") def reload_conf(url, protocol): mock_wire_data = protocol.mock_wire_data # This function reloads the conf mid-run to mimic an actual customer scenario if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts[ "goalstate"] >= 5: reload_conf.call_count += 1 self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)]) # Update the ga_manifest and incarnation to send largest version manifest mock_wire_data.data_files["ga_manifest"] = "wire/ga_manifest.xml" mock_wire_data.reload() self._add_write_permission_to_goal_state_files() reload_conf.incarnation += 1 mock_wire_data.set_incarnation(reload_conf.incarnation) reload_conf.call_count = 0 reload_conf.incarnation = 2 data_file = wire_protocol_data.DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with 
self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf, hotfix_frequency=1, normal_frequency=1, mock_random_update_time=False) as (update_handler, mock_telemetry): update_handler._protocol.mock_wire_data.set_incarnation(2) update_handler.run(debug=True) self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated") self.__assert_exit_code_successful(update_handler) self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0") self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)]) def test_it_should_not_download_anything_if_rsm_version_is_current_version(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self.__get_update_handler(test_data=data_file) as (update_handler, _): update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION)) update_handler._protocol.mock_wire_data.set_incarnation(2) update_handler.run(debug=True) self.__assert_exit_code_successful(update_handler) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_skip_wait_to_update_immediately_if_rsm_version_available(self): no_of_iterations = 100 def reload_conf(url, protocol): mock_wire_data = protocol.mock_wire_data # This function reloads the conf mid-run to mimic an actual customer scenario # Setting the rsm request to be sent after some iterations if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5: reload_conf.call_count += 1 # Assert GA version from status to ensure agent is running fine from the current version self.__assert_ga_version_in_status(protocol.aggregate_status) # Update the 
ext-conf and incarnation and add rsm version from GS mock_wire_data.data_files["ext_conf"] = "wire/ext_conf_rsm_version.xml" data_file['ga_manifest'] = "wire/ga_manifest.xml" mock_wire_data.reload() self._add_write_permission_to_goal_state_files() mock_wire_data.set_incarnation(2) reload_conf.call_count = 0 data_file = wire_protocol_data.DATA_FILE.copy() data_file['ga_manifest'] = "wire/ga_manifest_no_upgrade.xml" # Setting the prod frequency to mimic a real scenario with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf, autoupdate_frequency=6000) as (update_handler, mock_telemetry): update_handler._protocol.mock_wire_data.set_version_in_ga_manifest(str(CURRENT_VERSION)) update_handler._protocol.mock_wire_data.set_incarnation(20) update_handler.run(debug=True) self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated") self.assertLess(update_handler.get_iterations(), no_of_iterations, "The code should've exited as soon as rsm version was found") self.__assert_exit_code_successful(update_handler) self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="9.9.9.10") def test_it_should_mark_current_agent_as_bad_version_on_downgrade(self): no_of_iterations = 100 downgrade_version = "2.5.0" self.prepare_agents(count=1) self.assertTrue(os.path.exists(self.agent_dir(CURRENT_VERSION))) self.assertFalse(next(agent for agent in self.agents() if agent.version == CURRENT_VERSION).is_blacklisted, "The current agent should not be blacklisted") def reload_conf(url, protocol): mock_wire_data = protocol.mock_wire_data # This function reloads the conf mid-run to mimic an actual customer scenario if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts[ "goalstate"] >= 10 and mock_wire_data.call_counts["goalstate"] < 15: # Ensure we didn't try to download any agents except during the incarnation change self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)]) # mimic the 
rsm downgrade request mock_wire_data.data_files["ext_conf"] = "wire/ext_conf_downgrade_rsm_version.xml" data_file['ga_manifest'] = "wire/ga_manifest.xml" mock_wire_data.reload() self._add_write_permission_to_goal_state_files() mock_wire_data.set_incarnation(2) mock_wire_data.set_version_in_agent_family(downgrade_version) mock_wire_data.set_from_version_in_agent_family(str(CURRENT_VERSION)) reload_conf.call_count = 0 data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf) as (update_handler, mock_telemetry): # Set to current version to ensure no upgrade happens initially update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION)) update_handler._protocol.mock_wire_data.set_incarnation(20) update_handler.run(debug=True) self.__assert_exit_code_successful(update_handler) self.__assert_upgrade_telemetry_emitted(mock_telemetry, upgrade=False, version=downgrade_version) current_agent = next(agent for agent in self.agents() if agent.version == CURRENT_VERSION) self.assertTrue(current_agent.is_blacklisted, "The current agent should be blacklisted") self.assertEqual(current_agent.error.reason, "Marking the agent {0} as bad version since a downgrade was requested in the GoalState, " "suggesting that we really don't want to execute any extensions using this version".format(CURRENT_VERSION), "Invalid reason specified for blacklisting agent") self.__assert_agent_directories_exist_and_others_dont_exist(versions=[downgrade_version, str(CURRENT_VERSION)]) def test_it_should_do_self_update_if_vm_opt_out_rsm_upgrades_later(self): no_of_iterations = 100 # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") def reload_conf(url, protocol): mock_wire_data = protocol.mock_wire_data # 
This function reloads the conf mid-run to mimic an actual customer scenario if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5: reload_conf.call_count += 1 # Assert GA version from status to ensure agent is running fine from the current version self.__assert_ga_version_in_status(protocol.aggregate_status) # Update is_vm_enabled_for_rsm_upgrades flag to False update_handler._protocol.mock_wire_data.set_extension_config_is_vm_enabled_for_rsm_upgrades("False") self._add_write_permission_to_goal_state_files() mock_wire_data.set_incarnation(2) reload_conf.call_count = 0 data_file = wire_protocol_data.DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf_rsm_version.xml" with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf) as (update_handler, mock_telemetry): update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION)) update_handler._protocol.mock_wire_data.set_incarnation(20) update_handler.run(debug=True) self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated") self.assertLess(update_handler.get_iterations(), no_of_iterations, "The code should've exited as soon as version was found") self.__assert_exit_code_successful(update_handler) self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0") self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)]) @patch('azurelinuxagent.ga.update.get_collect_telemetry_events_handler') @patch('azurelinuxagent.ga.update.get_send_telemetry_events_handler') @patch('azurelinuxagent.ga.update.get_collect_logs_handler') @patch('azurelinuxagent.ga.update.get_monitor_handler') @patch('azurelinuxagent.ga.update.get_env_handler') class MonitorThreadTest(AgentTestCase): def setUp(self): super(MonitorThreadTest, self).setUp() self.event_patch = patch('azurelinuxagent.common.event.add_event') current_thread().name = "ExtHandler" 
protocol = Mock() self.update_handler = get_update_handler() self.update_handler.protocol_util = Mock() self.update_handler.protocol_util.get_protocol = Mock(return_value=protocol) clear_singleton_instances(ProtocolUtil) def _test_run(self, invocations=1): def iterator(*_, **__): iterator.count += 1 if iterator.count <= invocations: return True return False iterator.count = 0 with patch('os.getpid', return_value=42): with patch.object(UpdateHandler, '_is_orphaned') as mock_is_orphaned: mock_is_orphaned.__get__ = Mock(return_value=False) with patch.object(UpdateHandler, 'is_running') as mock_is_running: mock_is_running.__get__ = Mock(side_effect=iterator) with patch('azurelinuxagent.ga.exthandlers.get_exthandlers_handler'): with patch('azurelinuxagent.ga.remoteaccess.get_remote_access_handler'): with patch('azurelinuxagent.ga.agent_update_handler.get_agent_update_handler'): with patch('azurelinuxagent.ga.update.initialize_event_logger_vminfo_common_parameters_and_protocol'): with patch('azurelinuxagent.ga.cgroupapi.CGroupUtil.distro_supported', return_value=False): # skip all cgroup stuff with patch('azurelinuxagent.ga.update.is_log_collection_allowed', return_value=True): with patch('time.sleep'): with patch('sys.exit'): self.update_handler.run() def _setup_mock_thread_and_start_test_run(self, mock_thread, is_alive=True, invocations=0): thread = MagicMock() thread.run = MagicMock() thread.is_alive = MagicMock(return_value=is_alive) thread.start = MagicMock() mock_thread.return_value = thread self._test_run(invocations=invocations) return thread def test_start_threads(self, mock_env, mock_monitor, mock_collect_logs, mock_telemetry_send_events, mock_telemetry_collector): def _get_mock_thread(): thread = MagicMock() thread.run = MagicMock() return thread all_threads = [mock_telemetry_send_events, mock_telemetry_collector, mock_env, mock_monitor, mock_collect_logs] for thread in all_threads: thread.return_value = _get_mock_thread() self._test_run(invocations=0) for 
thread in all_threads: self.assertEqual(1, thread.call_count) self.assertEqual(1, thread().run.call_count) def test_check_if_monitor_thread_is_alive(self, _, mock_monitor, *args): # pylint: disable=unused-argument mock_monitor_thread = self._setup_mock_thread_and_start_test_run(mock_monitor, is_alive=True, invocations=1) self.assertEqual(1, mock_monitor.call_count) self.assertEqual(1, mock_monitor_thread.run.call_count) self.assertEqual(1, mock_monitor_thread.is_alive.call_count) self.assertEqual(0, mock_monitor_thread.start.call_count) def test_check_if_env_thread_is_alive(self, mock_env, *args): # pylint: disable=unused-argument mock_env_thread = self._setup_mock_thread_and_start_test_run(mock_env, is_alive=True, invocations=1) self.assertEqual(1, mock_env.call_count) self.assertEqual(1, mock_env_thread.run.call_count) self.assertEqual(1, mock_env_thread.is_alive.call_count) self.assertEqual(0, mock_env_thread.start.call_count) def test_restart_monitor_thread_if_not_alive(self, _, mock_monitor, *args): # pylint: disable=unused-argument mock_monitor_thread = self._setup_mock_thread_and_start_test_run(mock_monitor, is_alive=False, invocations=1) self.assertEqual(1, mock_monitor.call_count) self.assertEqual(1, mock_monitor_thread.run.call_count) self.assertEqual(1, mock_monitor_thread.is_alive.call_count) self.assertEqual(1, mock_monitor_thread.start.call_count) def test_restart_env_thread_if_not_alive(self, mock_env, *args): # pylint: disable=unused-argument mock_env_thread = self._setup_mock_thread_and_start_test_run(mock_env, is_alive=False, invocations=1) self.assertEqual(1, mock_env.call_count) self.assertEqual(1, mock_env_thread.run.call_count) self.assertEqual(1, mock_env_thread.is_alive.call_count) self.assertEqual(1, mock_env_thread.start.call_count) def test_restart_monitor_thread(self, _, mock_monitor, *args): # pylint: disable=unused-argument mock_monitor_thread = self._setup_mock_thread_and_start_test_run(mock_monitor, is_alive=False, invocations=1) 
self.assertEqual(True, mock_monitor.called) self.assertEqual(True, mock_monitor_thread.run.called) self.assertEqual(True, mock_monitor_thread.is_alive.called) self.assertEqual(True, mock_monitor_thread.start.called) def test_restart_env_thread(self, mock_env, *args): # pylint: disable=unused-argument mock_env_thread = self._setup_mock_thread_and_start_test_run(mock_env, is_alive=False, invocations=1) self.assertEqual(True, mock_env.called) self.assertEqual(True, mock_env_thread.run.called) self.assertEqual(True, mock_env_thread.is_alive.called) self.assertEqual(True, mock_env_thread.start.called) class ChildMock(Mock): def __init__(self, return_value=0, side_effect=None): Mock.__init__(self, return_value=return_value, side_effect=side_effect) self.poll = Mock(return_value=return_value, side_effect=side_effect) self.wait = Mock(return_value=return_value, side_effect=side_effect) class GoalStateMock(object): def __init__(self, incarnation, family, versions): if versions is None: versions = [] self.incarnation = incarnation self.extensions_goal_state = Mock() self.extensions_goal_state.id = incarnation self.extensions_goal_state.agent_families = GoalStateMock._create_agent_families(family, versions) agent_manifest = Mock() agent_manifest.pkg_list = GoalStateMock._create_packages(versions) self.fetch_agent_manifest = Mock(return_value=agent_manifest) @staticmethod def _create_agent_families(family, versions): families = [] if len(versions) > 0 and family is not None: manifest = VMAgentFamily(name=family) for i in range(0, 10): manifest.uris.append("https://nowhere.msft/agent/{0}".format(i)) families.append(manifest) return families @staticmethod def _create_packages(versions): packages = ExtHandlerPackageList() for version in versions: package = ExtHandlerPackage(str(version)) for i in range(0, 5): package_uri = "https://nowhere.msft/agent_pkg/{0}".format(i) package.uris.append(package_uri) packages.versions.append(package) return packages class ProtocolMock(object): 
def __init__(self, family="TestAgent", etag=42, versions=None, client=None): self.family = family self.client = client self.call_counts = { "update_goal_state": 0 } self._goal_state = GoalStateMock(etag, family, versions) self.goal_state_is_stale = False self.etag = etag self.versions = versions if versions is not None else [] def emulate_stale_goal_state(self): self.goal_state_is_stale = True def get_protocol(self): return self def get_goal_state(self): return self._goal_state def update_goal_state(self): self.call_counts["update_goal_state"] += 1 class TimeMock(Mock): def __init__(self, time_increment=1): Mock.__init__(self) self.next_time = time.time() self.time_call_count = 0 self.time_increment = time_increment self.sleep_interval = None def sleep(self, n): self.sleep_interval = n def time(self): self.time_call_count += 1 current_time = self.next_time self.next_time += self.time_increment return current_time class TryUpdateGoalStateTestCase(HttpRequestPredicates, AgentTestCase): """ Tests for UpdateHandler._try_update_goal_state() """ def test_it_should_return_true_on_success(self): update_handler = get_update_handler() with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: self.assertTrue(update_handler._try_update_goal_state(protocol), "try_update_goal_state should have succeeded") def test_it_should_return_false_on_failure(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: def http_get_handler(url, *_, **__): if self.is_goal_state_request(url): return HttpError('Exception to fake an error retrieving the goal state') return None protocol.set_http_handlers(http_get_handler=http_get_handler) update_handler = get_update_handler() self.assertFalse(update_handler._try_update_goal_state(protocol), "try_update_goal_state should have failed") def test_it_should_update_the_goal_state(self): update_handler = get_update_handler() with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: 
protocol.mock_wire_data.set_incarnation(12345) # the first goal state should produce an update update_handler._try_update_goal_state(protocol) self.assertEqual(update_handler._goal_state.incarnation, '12345', "The goal state was not updated (received unexpected incarnation)") # no changes in the goal state should not produce an update update_handler._try_update_goal_state(protocol) self.assertEqual(update_handler._goal_state.incarnation, '12345', "The goal state should not be updated (received unexpected incarnation)") # a new goal state should produce an update protocol.mock_wire_data.set_incarnation(6789) update_handler._try_update_goal_state(protocol) self.assertEqual(update_handler._goal_state.incarnation, '6789', "The goal state was not updated (received unexpected incarnation)") def test_it_should_limit_the_number_of_errors_output_to_the_local_log_and_telemetry(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: def http_get_handler(url, *_, **__): if self.is_goal_state_request(url): if fail_goal_state_request: return HttpError('Exception to fake an error retrieving the goal state') return None protocol.set_http_handlers(http_get_handler=http_get_handler) @contextlib.contextmanager def create_log_and_telemetry_mocks(): messages = [] with patch("azurelinuxagent.common.logger.Logger.log", side_effect=lambda level, fmt, *args: messages.append("{0} {1}".format(LogLevel.STRINGS[level], fmt.format(*args)))): with patch("azurelinuxagent.common.event.add_event") as add_event_patcher: yield messages, add_event_patcher # E0601: Using variable 'log_messages' before assignment (used-before-assignment) filter_log_messages = lambda regex: [m for m in log_messages if re.match(regex, m)] # pylint: disable=used-before-assignment errors = lambda: filter_log_messages('ERROR Error fetching the goal state.*') periodic_errors = lambda: filter_log_messages(r'ERROR Fetching the goal state is still failing*') success_messages = lambda: filter_log_messages(r'INFO 
Fetching the goal state recovered from previous errors.*') # E0601: Using variable 'log_messages' before assignment (used-before-assignment) format_assert_message = lambda msg: "{0}\n*** Log: ***\n{1}".format(msg, "\n".join(log_messages)) # pylint: disable=used-before-assignment # # Initially calls to retrieve the goal state are successful... # update_handler = get_update_handler() fail_goal_state_request = False with create_log_and_telemetry_mocks() as (log_messages, add_event): update_handler._try_update_goal_state(protocol) self.assertTrue(len(log_messages) == 0, format_assert_message("A successful call should not produce any log messages.")) self.assertTrue(add_event.call_count == 0, "A successful call should not produce any telemetry events: [{0}]".format(add_event.call_args_list)) # # ... then errors start happening, and we report the first few only... # fail_goal_state_request = True with create_log_and_telemetry_mocks() as (log_messages, add_event): for _ in range(10): update_handler._try_update_goal_state(protocol) e = errors() pe = periodic_errors() self.assertEqual(3, len(e), format_assert_message("Exactly 3 errors should have been reported.")) self.assertEqual(1, len(pe), format_assert_message("Exactly 1 periodic error should have been reported.")) self.assertEqual(4, len(log_messages), format_assert_message("A total of 4 messages should have been logged.")) self.assertEqual(4, add_event.call_count, "Each of 4 errors should have produced a telemetry event. Got: [{0}]".format(add_event.call_args_list)) # # ... if errors continue happening we report them only periodically ... 
# with create_log_and_telemetry_mocks() as (log_messages, add_event): for _ in range(5): update_handler._update_goal_state_next_error_report = datetime.now(UTC) # force the reporting period to elapse update_handler._try_update_goal_state(protocol) e = errors() pe = periodic_errors() self.assertEqual(0, len(e), format_assert_message("No errors should have been reported.")) self.assertEqual(5, len(pe), format_assert_message("All 5 errors should have been reported periodically.")) self.assertEqual(5, len(log_messages), format_assert_message("A total of 5 messages should have been logged.")) self.assertEqual(5, add_event.call_count, "Each of the 5 errors should have produced a telemetry event. Got: [{0}]".format(add_event.call_args_list)) # # ... when the errors stop happening we report a recovery message # fail_goal_state_request = False with create_log_and_telemetry_mocks() as (log_messages, add_event): update_handler._try_update_goal_state(protocol) s = success_messages() e = errors() pe = periodic_errors() self.assertEqual(len(s), 1, "Recovering after failures should have produced an info message: [{0}]".format("\n".join(log_messages))) self.assertTrue(len(e) == 0 and len(pe) == 0, "Recovering after failures should have not produced any errors: [{0}]".format("\n".join(log_messages))) self.assertEqual(1, len(log_messages), format_assert_message("A total of 1 message should have been logged.")) self.assertTrue(add_event.call_count == 1 and add_event.call_args_list[0][1]['is_success'] == True, "Recovering after failures should produce a telemetry event (success=true): [{0}]".format(add_event.call_args_list)) def _create_update_handler(): """ Creates an UpdateHandler in which agent updates are mocked as a no-op. 
""" update_handler = get_update_handler() update_handler._download_agent_if_upgrade_available = Mock(return_value=False) return update_handler @contextlib.contextmanager def _mock_exthandlers_handler(extension_statuses=None, save_to_history=False): """ Creates an ExtHandlersHandler that doesn't actually handle any extensions, but that returns status for 1 extension. The returned ExtHandlersHandler uses a mock WireProtocol, and both the run() and report_ext_handlers_status() are mocked. The mock run() is a no-op. If a list of extension_statuses is given, successive calls to the mock report_ext_handlers_status() returns a single extension with each of the statuses in the list. If extension_statuses is omitted all calls to report_ext_handlers_status() return a single extension with a success status. """ def create_vm_status(extension_status): vm_status = VMStatus(status="Ready", message="Ready") vm_status.vmAgent.extensionHandlers = [ExtHandlerStatus()] vm_status.vmAgent.extensionHandlers[0].extension_status = ExtensionStatus(name="TestExtension") vm_status.vmAgent.extensionHandlers[0].extension_status.status = extension_status return vm_status with mock_wire_protocol(DATA_FILE, save_to_history=save_to_history) as protocol: exthandlers_handler = ExtHandlersHandler(protocol) exthandlers_handler.run = Mock() if extension_statuses is None: exthandlers_handler.report_ext_handlers_status = Mock(return_value=create_vm_status(ExtensionStatusValue.success)) else: exthandlers_handler.report_ext_handlers_status = Mock(side_effect=[create_vm_status(s) for s in extension_statuses]) yield exthandlers_handler class ProcessGoalStateTestCase(AgentTestCase): """ Tests for UpdateHandler._process_goal_state() """ def test_it_should_process_goal_state_only_on_new_goal_state(self): with _mock_exthandlers_handler() as exthandlers_handler: update_handler = _create_update_handler() remote_access_handler = Mock() remote_access_handler.run = Mock() agent_update_handler = Mock() 
agent_update_handler.run = Mock() # process a goal state update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(1, exthandlers_handler.run.call_count, "exthandlers_handler.run() should have been called on the first goal state") self.assertEqual(1, exthandlers_handler.report_ext_handlers_status.call_count, "exthandlers_handler.report_ext_handlers_status() should have been called on the first goal state") self.assertEqual(1, remote_access_handler.run.call_count, "remote_access_handler.run() should have been called on the first goal state") self.assertEqual(1, agent_update_handler.run.call_count, "agent_update_handler.run() should have been called on the first goal state") # process the same goal state update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(1, exthandlers_handler.run.call_count, "exthandlers_handler.run() should have not been called on the same goal state") self.assertEqual(2, exthandlers_handler.report_ext_handlers_status.call_count, "exthandlers_handler.report_ext_handlers_status() should have been called on the same goal state") self.assertEqual(1, remote_access_handler.run.call_count, "remote_access_handler.run() should not have been called on the same goal state") self.assertEqual(2, agent_update_handler.run.call_count, "agent_update_handler.run() should have been called on the same goal state") # process a new goal state exthandlers_handler.protocol.mock_wire_data.set_incarnation(999) exthandlers_handler.protocol.client.update_goal_state() update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(2, exthandlers_handler.run.call_count, "exthandlers_handler.run() should have been called on a new goal state") self.assertEqual(3, exthandlers_handler.report_ext_handlers_status.call_count, "exthandlers_handler.report_ext_handlers_status() should have been called on a 
new goal state") self.assertEqual(2, remote_access_handler.run.call_count, "remote_access_handler.run() should have been called on a new goal state") self.assertEqual(3, agent_update_handler.run.call_count, "agent_update_handler.run() should have been called on the new goal state") def test_it_should_write_the_agent_status_to_the_history_folder(self): with _mock_exthandlers_handler(save_to_history=True) as exthandlers_handler: update_handler = _create_update_handler() remote_access_handler = Mock() remote_access_handler.run = Mock() agent_update_handler = Mock() agent_update_handler.run = Mock() update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) incarnation = exthandlers_handler.protocol.get_goal_state().incarnation matches = glob.glob(os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME, "*_{0}".format(incarnation))) self.assertTrue(len(matches) == 1, "Could not find the history directory for the goal state. Got: {0}".format(matches)) status_file = os.path.join(matches[0], AGENT_STATUS_FILE) self.assertTrue(os.path.exists(status_file), "Could not find {0}".format(status_file)) @staticmethod def _prepare_fast_track_goal_state(): """ Creates a set of mock wire data where the most recent goal state is a FastTrack goal state; also invokes HostPluginProtocol.fetch_vm_settings() to save the Fast Track status to disk """ # Do a query for the vmSettings; this would retrieve a FastTrack goal state and keep track of its timestamp mock_wire_data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(mock_wire_data_file) as protocol: protocol.mock_wire_data.set_etag("0123456789") _ = protocol.client.get_host_plugin().fetch_vm_settings() return mock_wire_data_file def test_it_should_mark_outdated_goal_states_on_service_restart_when_fast_track_is_disabled(self): data_file = self._prepare_fast_track_goal_state() with patch("azurelinuxagent.common.conf.get_enable_fast_track", return_value=False): with 
mock_wire_protocol(data_file) as protocol: with mock_update_handler(protocol) as update_handler: update_handler.run() self.assertTrue(protocol.client.get_goal_state().extensions_goal_state.is_outdated) @staticmethod def _http_get_vm_settings_handler_not_found(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(httpclient.NOT_FOUND) # HostGAPlugin returns 404 if the API is not supported return None def test_it_should_mark_outdated_goal_states_on_service_restart_when_host_ga_plugin_stops_supporting_vm_settings(self): data_file = self._prepare_fast_track_goal_state() with mock_wire_protocol(data_file, http_get_handler=self._http_get_vm_settings_handler_not_found) as protocol: with mock_update_handler(protocol) as update_handler: update_handler.run() self.assertTrue(protocol.client.get_goal_state().extensions_goal_state.is_outdated) def test_it_should_clear_the_timestamp_for_the_most_recent_fast_track_goal_state(self): data_file = self._prepare_fast_track_goal_state() if HostPluginProtocol.get_fast_track_timestamp() == timeutil.create_utc_timestamp(datetime_min_utc): raise Exception("The test setup did not save the Fast Track state") with patch("azurelinuxagent.common.conf.get_enable_fast_track", return_value=False): with patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("2.2.53")): with mock_wire_protocol(data_file) as protocol: with mock_update_handler(protocol) as update_handler: update_handler.run() self.assertEqual(HostPluginProtocol.get_fast_track_timestamp(), timeutil.create_utc_timestamp(datetime_min_utc), "The Fast Track state was not cleared") def test_it_should_default_fast_track_timestamp_to_datetime_min(self): data = DATA_FILE_VM_SETTINGS.copy() # TODO: Currently, there's a limitation in the mocks where bumping the incarnation but the goal # state will cause the agent to error out while trying to write the certificates to disk. 
These # files have no dependencies on certs, so using them does not present that issue. # # Note that the scenario this test is representing does not depend on certificates at all, and # can be changed to use the default files when the above limitation is addressed. data["vm_settings"] = "hostgaplugin/vm_settings-fabric-no_thumbprints.json" data['goal_state'] = 'wire/goal_state_no_certs.xml' def vm_settings_no_change(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(httpclient.NOT_MODIFIED) return None def vm_settings_not_supported(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(404) return None with mock_wire_protocol(data) as protocol: def mock_live_migration(iteration): if iteration == 1: protocol.mock_wire_data.set_incarnation(2) protocol.set_http_handlers(http_get_handler=vm_settings_no_change) elif iteration == 2: protocol.mock_wire_data.set_incarnation(3) protocol.set_http_handlers(http_get_handler=vm_settings_not_supported) with mock_update_handler(protocol, 3, on_new_iteration=mock_live_migration) as update_handler: with patch("azurelinuxagent.ga.update.logger.error") as patched_error: def check_for_errors(): msg_fragment = "Error fetching the goal state:" for (args, _) in filter(lambda a: len(a) > 0, patched_error.call_args_list): if msg_fragment in args[0]: self.fail("Found error: {}".format(args[0])) update_handler.run(debug=True) check_for_errors() timestamp = protocol.client.get_host_plugin()._fast_track_timestamp self.assertEqual(timestamp, timeutil.create_utc_timestamp(datetime_min_utc), "Expected fast track time stamp to be set to {0}, got {1}".format(datetime_min_utc, timestamp)) def test_it_should_refresh_certificates_on_fast_track_goal_state_after_hibernate_resume_cycle(self): # # A hibernate/resume cycle is a special case in that on resume it produces a new Fabric goal state with incarnation 1. 
Since the VM is re-allocated, # the goal state will include a new tenant encryption certificate. If the incarnation was also 1 before hibernation, the Agent won't detect this new # goal state and subsequent Fast Track goal states will fail because the Agent has not fetched the new certificate. # # To address this issue, before executing any Fast Track goal state, _try_update_goal_state() checks that the current goal state includes the # certificate used by extensions to decrypt their protected settings and forces a refresh if it does not. # # The test data below uses files captured from an actual scenario (minus edits to remove irrelevant/sensitive data) and consists of 3 goal states: # # * goal_state_1: WireServer + HGAP (Fast Track) goal state before hibernation; incarnation 1. # * goal_state_2: WireServer + HGAP (Fabric) goal state after resume; also incarnation 1, but new certificates. # * goal_state_3: Fast Track goal state (requires new certificates) # update_handler = get_update_handler() goal_state_1 = wire_protocol_data.DATA_FILE.copy() goal_state_1.update({ "goal_state": "hibernate/goal_state_1/GoalState.xml", "hosting_env": "hibernate/goal_state_1/HostingEnvironmentConfig.xml", "shared_config": "hibernate/goal_state_1/SharedConfig.xml", "certs": "hibernate/goal_state_1/Certificates.xml", "ext_conf": "hibernate/goal_state_1/ExtensionsConfig.xml", "trans_prv": "hibernate/TransportPrivate.pem", "trans_cert": "hibernate/TransportCert.pem", "vm_settings": "hibernate/goal_state_1/VmSettings.json", "ETag": "519198402722078973" }) goal_state_1_certificates = [c["thumbprint"] for c in json.loads(load_data("hibernate/goal_state_1/Certificates.json"))] goal_state_2 = goal_state_1.copy() goal_state_2.update({ "goal_state": "hibernate/goal_state_2/GoalState.xml", "hosting_env": "hibernate/goal_state_2/HostingEnvironmentConfig.xml", "shared_config": "hibernate/goal_state_2/SharedConfig.xml", "certs": "hibernate/goal_state_2/Certificates.xml", "ext_conf": 
"hibernate/goal_state_2/ExtensionsConfig.xml", "vm_settings": "hibernate/goal_state_2/VmSettings.json", "ETag": "12335680585613334365" }) goal_state_2_certificates = [c["thumbprint"] for c in json.loads(load_data("hibernate/goal_state_2/Certificates.json"))] goal_state_3 = goal_state_2.copy() goal_state_3.update({ "vm_settings": "hibernate/goal_state_3/VmSettings.json", "ETag": "6382954395241675842" }) # # Mock these to make them no-ops (we do not want extensions, JIT requests, or Agent updates to run as part of this test) # exthandlers_handler, remote_access_handler, agent_update_handler = Mock(), Mock(), Mock() with mock_wire_protocol(goal_state_1, detect_protocol=False) as protocol: exthandlers_handler.protocol = protocol # # We initialize the mock protocol with goal_state_1 and do some checks to double-check the test is setup correctly # update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) gs = update_handler._goal_state egs = gs.extensions_goal_state egs_1_id = egs.id certificates = [c["thumbprint"] for c in gs.certs.summary] if gs.incarnation != '1': raise Exception('Incorrect test initialization. Incarnation should be 1, was {0}'.format(gs.incarnation)) if egs.source != GoalStateSource.FastTrack: raise Exception('Incorrect test initialization. Goal state should be FastTrack, was {0}'.format(egs.source)) if egs.etag != goal_state_1["ETag"]: raise Exception('Incorrect test initialization. Expected etag {0}, got {1} '.format(goal_state_1["Etag"], egs.etag)) if sorted(certificates) != sorted(goal_state_1_certificates): raise Exception('Incorrect test initialization. Expected certificates {0}, got {1} '.format(goal_state_1_certificates, certificates)) # # On resume, the Agent will receive goal_state_2, but since the incarnation is also 1, it won't detect it as a new goal state and # _try_update_goal_state won't fetch the new data. 
# # Note that the Agent does detect the new VmSettings, but since they represent a Fabric goal state, it ignores them. # protocol.mock_wire_data = wire_protocol_data.WireProtocolData(goal_state_2) with patch('azurelinuxagent.common.protocol.goal_state.add_event') as patch_add_event: update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) gs = update_handler._goal_state egs = gs.extensions_goal_state certificates = [c["thumbprint"] for c in gs.certs.summary] telemetry_events = [kwargs["message"] for _, kwargs in patch_add_event.call_args_list if kwargs['op'] == 'GoalState'] if gs.incarnation != '1': raise Exception('Unexpected Agent behavior. Incarnation should be 1, was {0}'.format(gs.incarnation)) if egs.id != egs_1_id: raise Exception('Unexpected Agent behavior. The ID For the extensions goal state should be {0}; got {1}'.format(egs_1_id, egs.id)) if sorted(certificates) != sorted(goal_state_1_certificates): raise Exception('Unexpected Agent behavior. Expected certificates {0}, got {1} '.format(goal_state_1_certificates, certificates)) regex = r'Fetched new vmSettings.+eTag: {0}'.format(goal_state_2["ETag"]) if not any(re.match(regex, e) is not None for e in telemetry_events): raise Exception('Unexpected Agent behavior. Expected a telemetry event matching {0}; got: {1}'.format(regex, telemetry_events)) message = 'The vmSettings originated via Fabric; will ignore them.' if not any(message == e for e in telemetry_events): raise Exception('Unexpected Agent behavior. Expected a telemetry event matching "{0}"; got: {1}'.format(message, telemetry_events)) # # This is the actual test: when a Fast Track goal state shows up, the Agent should pull the certificates that originated in the previous # Fabric goal state, and the updated goal state should include all the certificates referenced by the extensions in the new goal state. 
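The certificate check described in the comments above can be sketched in isolation. This is a simplified, hypothetical model — `Setting`, `Extension`, and `needs_certificate_refresh` are illustrative names, not the agent's actual types — but it captures the decision `_try_update_goal_state()` makes: if any protected setting references a thumbprint missing from the cached goal state, force a refresh.

```python
from collections import namedtuple

# Illustrative stand-ins for the goal-state model; NOT the real
# azurelinuxagent types, just enough structure to show the check.
Setting = namedtuple("Setting", ["protected_settings", "certificate_thumbprint"])
Extension = namedtuple("Extension", ["name", "settings"])

def needs_certificate_refresh(extensions, available_thumbprints):
    """Return True if any extension's protected settings reference a
    certificate thumbprint that is not in the cached goal state."""
    for extension in extensions:
        for setting in extension.settings:
            if setting.protected_settings is None:
                continue  # nothing encrypted, no certificate needed
            if setting.certificate_thumbprint not in available_thumbprints:
                return True  # force a goal-state refresh to fetch the cert
    return False
```

Run against the hibernate scenario above, this predicate would fire on the first Fast Track goal state after resume, because the new tenant encryption certificate is not yet cached.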
# protocol.mock_wire_data = wire_protocol_data.WireProtocolData(goal_state_3) update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) gs = update_handler._goal_state egs = gs.extensions_goal_state certificates = [c["thumbprint"] for c in gs.certs.summary] self.assertEqual('1', gs.incarnation, "The incarnation of the latest Goal State should be 1") self.assertEqual(GoalStateSource.FastTrack, egs.source, "The latest Goal State should be Fast Track") self.assertEqual(goal_state_3["ETag"], egs.etag, "The etag of the latest Goal State should be {0}".format(goal_state_3["ETag"])) self.assertEqual(sorted(goal_state_2_certificates), sorted(certificates), "The certificates in the latest Goal State should be {0}".format(goal_state_2_certificates)) for e in egs.extensions: for s in e.settings: if s.protectedSettings is not None: self.assertIn(s.certificateThumbprint, certificates, "Certificate {0}, needed by {1} is missing from the certificates in the goal state: {2}.".format(s.certificateThumbprint, e.name, certificates)) class HeartbeatTestCase(AgentTestCase): @patch("azurelinuxagent.common.logger.info") @patch("azurelinuxagent.ga.update.add_event") def test_telemetry_heartbeat_creates_event(self, patch_add_event, patch_info, *_): update_handler = get_update_handler() agent_update_handler = Mock() update_handler.last_telemetry_heartbeat = datetime.now(UTC) - timedelta(hours=1) update_handler._send_heartbeat_telemetry(agent_update_handler) self.assertEqual(1, patch_add_event.call_count) self.assertTrue(any(call_args[0] == "[HEARTBEAT] Agent {0} is running as the goal state agent [DEBUG {1}]" for call_args in patch_info.call_args), "The heartbeat was not written to the agent's log") class AgentMemoryCheckTestCase(AgentTestCase): @patch("azurelinuxagent.common.logger.info") @patch("azurelinuxagent.ga.update.add_event") def test_check_agent_memory_usage_raises_exit_exception(self, patch_add_event, patch_info, *_): with 
patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.check_agent_memory_usage", side_effect=AgentMemoryExceededException()): with patch('azurelinuxagent.common.conf.get_enable_agent_memory_usage_check', return_value=True): with self.assertRaises(ExitException) as context_manager: update_handler = get_update_handler() update_handler._last_check_memory_usage_time = time.time() - 24 * 60 update_handler._check_agent_memory_usage() self.assertEqual(1, patch_add_event.call_count) self.assertTrue(any("Check on agent memory usage" in call_args[0] for call_args in patch_info.call_args), "The memory check was not written to the agent's log") self.assertIn("Agent {0} is reached memory limit -- exiting".format(CURRENT_AGENT), ustr(context_manager.exception), "An incorrect exception was raised") @patch("azurelinuxagent.common.logger.warn") @patch("azurelinuxagent.ga.update.add_event") def test_check_agent_memory_usage_fails(self, patch_add_event, patch_warn, *_): with patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.check_agent_memory_usage", side_effect=Exception()): with patch('azurelinuxagent.common.conf.get_enable_agent_memory_usage_check', return_value=True): update_handler = get_update_handler() update_handler._last_check_memory_usage_time = time.time() - 24 * 60 update_handler._check_agent_memory_usage() self.assertTrue(any("Error checking the agent's memory usage" in call_args[0] for call_args in patch_warn.call_args), "The memory check was not written to the agent's log") self.assertEqual(1, patch_add_event.call_count) add_events = [kwargs for _, kwargs in patch_add_event.call_args_list if kwargs["op"] == WALAEventOperation.AgentMemory] self.assertTrue( len(add_events) == 1, "Exactly 1 event should have been emitted when memory usage check fails. 
Got: {0}".format(add_events)) self.assertIn( "Error checking the agent's memory usage", add_events[0]["message"], "The error message is not correct when memory usage check failed") @patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.check_agent_memory_usage") @patch("azurelinuxagent.ga.update.add_event") def test_check_agent_memory_usage_not_called(self, patch_add_event, patch_memory_usage, *_): # This test ensures that the memory usage check is not run immediately on agent startup; instead it waits for CHILD_LAUNCH_INTERVAL with patch('azurelinuxagent.common.conf.get_enable_agent_memory_usage_check', return_value=True): update_handler = get_update_handler() update_handler._check_agent_memory_usage() self.assertEqual(0, patch_memory_usage.call_count) self.assertEqual(0, patch_add_event.call_count) class GoalStateIntervalTestCase(AgentTestCase): def test_initial_goal_state_period_should_default_to_goal_state_period(self): configuration_provider = conf.ConfigurationProvider() test_file = os.path.join(self.tmp_dir, "waagent.conf") with open(test_file, "w") as file_: file_.write("Extensions.GoalStatePeriod=987654321\n") conf.load_conf_from_file(test_file, configuration_provider) self.assertEqual(987654321, conf.get_initial_goal_state_period(conf=configuration_provider)) def test_update_handler_should_use_the_default_goal_state_period(self): update_handler = get_update_handler() default = conf.get_int_default_value("Extensions.GoalStatePeriod") self.assertEqual(default, update_handler._goal_state_period, "The UpdateHandler is not using the default goal state period") def test_update_handler_should_not_use_the_default_goal_state_period_when_extensions_are_disabled(self): with patch('azurelinuxagent.common.conf.get_extensions_enabled', return_value=False): update_handler = get_update_handler() self.assertEqual(GOAL_STATE_PERIOD_EXTENSIONS_DISABLED, update_handler._goal_state_period, "Incorrect goal state period when extensions are disabled") def
test_the_default_goal_state_period_and_initial_goal_state_period_should_be_the_same(self): update_handler = get_update_handler() default = conf.get_int_default_value("Extensions.GoalStatePeriod") self.assertEqual(default, update_handler._goal_state_period, "The UpdateHandler is not using the default goal state period") def test_update_handler_should_use_the_initial_goal_state_period_when_it_is_different_to_the_goal_state_period(self): with patch('azurelinuxagent.common.conf.get_initial_goal_state_period', return_value=99999): update_handler = get_update_handler() self.assertEqual(99999, update_handler._goal_state_period, "Expected the initial goal state period") def test_update_handler_should_use_the_initial_goal_state_period_until_the_goal_state_converges(self): initial_goal_state_period, goal_state_period = 11111, 22222 with patch('azurelinuxagent.common.conf.get_initial_goal_state_period', return_value=initial_goal_state_period): with patch('azurelinuxagent.common.conf.get_goal_state_period', return_value=goal_state_period): with _mock_exthandlers_handler([ExtensionStatusValue.transitioning, ExtensionStatusValue.success]) as exthandlers_handler: remote_access_handler = Mock() agent_update_handler = Mock() update_handler = _create_update_handler() self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period") # the extension is transitioning, so we should still be using the initial goal state period update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period when the extension is transitioning") # the goal state converged (the extension succeeded), so we should switch to the regular goal state period update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(goal_state_period,
update_handler._goal_state_period, "Expected the regular goal state period after the goal state converged") def test_update_handler_should_switch_to_the_regular_goal_state_period_when_the_goal_state_does_not_converges(self): initial_goal_state_period, goal_state_period = 11111, 22222 with patch('azurelinuxagent.common.conf.get_initial_goal_state_period', return_value=initial_goal_state_period): with patch('azurelinuxagent.common.conf.get_goal_state_period', return_value=goal_state_period): with _mock_exthandlers_handler([ExtensionStatusValue.transitioning, ExtensionStatusValue.transitioning]) as exthandlers_handler: remote_access_handler = Mock() agent_update_handler = Mock() update_handler = _create_update_handler() self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period") # the extension is transitioning, so we should still be using the initial goal state period update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period when the extension is transitioning") # a new goal state arrives before the current goal state converged (the extension is transitioning), so we should switch to the regular goal state period exthandlers_handler.protocol.mock_wire_data.set_incarnation(100) update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(goal_state_period, update_handler._goal_state_period, "Expected the regular goal state period when the goal state does not converge") class ExtensionsSummaryTestCase(AgentTestCase): @staticmethod def _create_extensions_summary(extension_statuses): """ Creates an ExtensionsSummary from an array of (extension name, extension status) tuples """ vm_status = VMStatus(status="Ready", message="Ready") vm_status.vmAgent.extensionHandlers = [ExtHandlerStatus()] *
len(extension_statuses) for i in range(len(extension_statuses)): vm_status.vmAgent.extensionHandlers[i].extension_status = ExtensionStatus(name=extension_statuses[i][0]) vm_status.vmAgent.extensionHandlers[0].extension_status.status = extension_statuses[i][1] return ExtensionsSummary(vm_status) def test_equality_operator_should_return_true_on_items_with_the_same_value(self): summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)]) summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)]) self.assertTrue(summary1 == summary2, "{0} == {1} should be True".format(summary1, summary2)) def test_equality_operator_should_return_false_on_items_with_different_values(self): summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)]) summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)]) self.assertFalse(summary1 == summary2, "{0} == {1} should be False") summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success)]) summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)]) self.assertFalse(summary1 == summary2, "{0} == {1} should be False") def test_inequality_operator_should_return_true_on_items_with_different_values(self): summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)]) summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), 
("Extension 2", ExtensionStatusValue.success)]) self.assertTrue(summary1 != summary2, "{0} != {1} should be True".format(summary1, summary2)) summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success)]) summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)]) self.assertTrue(summary1 != summary2, "{0} != {1} should be True") def test_inequality_operator_should_return_false_on_items_with_same_value(self): summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)]) summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)]) self.assertFalse(summary1 != summary2, "{0} != {1} should be False".format(summary1, summary2)) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-a976115/tests/lib/000077500000000000000000000000001510742556200173025ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests/lib/__init__.py000066400000000000000000000011651510742556200214160ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-a976115/tests/lib/cgroups_tools.py000066400000000000000000000027511510742556200225630ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os from azurelinuxagent.common.utils import fileutil class CGroupsTools(object): @staticmethod def create_legacy_agent_cgroup(cgroups_file_system_root, controller, daemon_pid): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. This method creates a mock cgroup using the legacy path and adds the given PID to it. """ legacy_cgroup = os.path.join(cgroups_file_system_root, controller, "WALinuxAgent", "WALinuxAgent") if not os.path.exists(legacy_cgroup): os.makedirs(legacy_cgroup) fileutil.append_file(os.path.join(legacy_cgroup, "cgroup.procs"), daemon_pid + "\n") return legacy_cgroup Azure-WALinuxAgent-a976115/tests/lib/event_logger_tools.py000066400000000000000000000053051510742556200235570ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import platform import azurelinuxagent.common.event as event from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME import tests.lib.tools as tools from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol class EventLoggerTools(object): mock_imds_data = { 'location': 'uswest', 'subscriptionId': 'AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE', 'resourceGroupName': 'test-rg', 'vmId': '99999999-8888-7777-6666-555555555555', 'image_origin': 2468 } @staticmethod def initialize_event_logger(event_dir): """ Initializes the event logger using mock data for the common parameters; the goal state fields are taken from wire_protocol_data.DATA_FILE and the IMDS fields from mock_imds_data. 
""" if not os.path.exists(event_dir): os.mkdir(event_dir) event.init_event_logger(event_dir) mock_imds_info = tools.Mock() mock_imds_info.location = EventLoggerTools.mock_imds_data['location'] mock_imds_info.subscriptionId = EventLoggerTools.mock_imds_data['subscriptionId'] mock_imds_info.resourceGroupName = EventLoggerTools.mock_imds_data['resourceGroupName'] mock_imds_info.vmId = EventLoggerTools.mock_imds_data['vmId'] mock_imds_info.image_origin = EventLoggerTools.mock_imds_data['image_origin'] mock_imds_client = tools.Mock() mock_imds_client.get_compute = tools.Mock(return_value=mock_imds_info) with mock_wire_protocol(wire_protocol_data.DATA_FILE) as mock_protocol: with tools.patch("azurelinuxagent.common.event.get_imds_client", return_value=mock_imds_client): event.initialize_event_logger_vminfo_common_parameters_and_protocol(mock_protocol) @staticmethod def get_expected_os_version(): """ Returns the expected value for the OS Version in telemetry events """ return u"{0}:{1}-{2}-{3}:{4}".format(platform.system(), DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, platform.release()) Azure-WALinuxAgent-a976115/tests/lib/extension_emulator.py000066400000000000000000000337111510742556200236050ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import json import re import uuid import os import contextlib import subprocess import azurelinuxagent.common.conf as conf from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.exthandlers import ExtHandlerInstance, ExtCommandEnvVariable from tests.lib.tools import Mock, patch from tests.lib.wire_protocol_data import WireProtocolData from tests.lib.mock_wire_protocol import MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates class ExtensionCommandNames(object): INSTALL = "install" UNINSTALL = "uninstall" UPDATE = "update" ENABLE = "enable" DISABLE = "disable" class Actions(object): """ A collection of static methods providing some basic functionality for the ExtensionEmulator class' actions. """ @staticmethod def succeed_action(*_, **__): """ A nop action with the correct function signature for ExtensionEmulator actions. """ return 0 @staticmethod def generate_unique_fail(): """ Utility function for tracking the return code of a command. Returns both a unique return code, and a function pointer which returns said return code. """ return_code = str(uuid.uuid4()) def fail_action(*_, **__): return return_code return return_code, fail_action def extension_emulator(name="OSTCExtensions.ExampleHandlerLinux", version="1.0.0", update_mode="UpdateWithInstall", report_heartbeat=False, continue_on_update_failure=False, supports_multiple_extensions=False, install_action=Actions.succeed_action, uninstall_action=Actions.succeed_action, enable_action=Actions.succeed_action, disable_action=Actions.succeed_action, update_action=Actions.succeed_action): """ Factory method for ExtensionEmulator objects with sensible defaults. """ # Linter reports too many arguments, but this isn't an issue because all are defaulted; # no caller will have to actually provide all of the arguments listed. 
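The `extension_emulator` factory above relies on a pattern worth isolating: every parameter is keyword-defaulted and every action shares one signature, so a test overrides only the behavior it cares about. A minimal sketch of the same idea — `succeed` and `make_emulator` are illustrative names, not this library's API:

```python
def succeed(*_, **__):
    # Default action: report success, analogous to Actions.succeed_action.
    return 0

def make_emulator(name="Example.Handler", version="1.0.0",
                  install_action=succeed, enable_action=succeed):
    # All arguments defaulted: callers supply only the overrides they need.
    return {
        "name": name,
        "version": version,
        "actions": {"install": install_action, "enable": enable_action},
    }

# Override a single action; everything else keeps its default.
broken_enable = make_emulator(enable_action=lambda *_, **__: 1)
```

This keeps test setup short without a long positional argument list, which is why the linter warning about the argument count is acceptable here.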
return ExtensionEmulator(name, version, update_mode, report_heartbeat, continue_on_update_failure, supports_multiple_extensions, install_action, uninstall_action, enable_action, disable_action, update_action) @contextlib.contextmanager def enable_invocations(*emulators): """ Allows ExtHandlersHandler objects to call the specified emulators and keeps track of the order of those invocations. Returns the invocation record. Note that this method patches subprocess.Popen and ExtHandlerInstance.load_manifest. """ invocation_record = InvocationRecord() patched_popen = generate_patched_popen(invocation_record, *emulators) patched_load_manifest = generate_mock_load_manifest(*emulators) with patch.object(ExtHandlerInstance, "load_manifest", patched_load_manifest): with patch("subprocess.Popen", patched_popen): yield invocation_record def generate_put_handler(*emulators): """ Create a HTTP handler to store status blobs for each provided emulator. For use with tests.lib.mocks.mock_wire_protocol. """ def mock_put_handler(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): status_blob = WireProtocolData.get_status_blob_from_hostgaplugin_put_status_request(args[0]) else: status_blob = args[0] handler_statuses = json.loads(status_blob).get("aggregateStatus", {}).get("handlerAggregateStatus", []) for handler_status in handler_statuses: supplied_name = handler_status.get("handlerName", None) supplied_version = handler_status.get("handlerVersion", None) try: matching_ext = _first_matching_emulator(emulators, supplied_name, supplied_version) matching_ext.status_blobs.append(handler_status) except StopIteration: # Tests will want to know that the agent is running an extension they didn't specifically allocate. 
raise Exception("Extension running, but not present in emulators: {0}, {1}".format(supplied_name, supplied_version)) return MockHttpResponse(status=200) return mock_put_handler class InvocationRecord: def __init__(self): self._queue = [] def add(self, ext_name, ext_ver, ext_cmd): self._queue.append((ext_name, ext_ver, ext_cmd)) def compare(self, *expected_cmds): """ Verifies that any and all recorded invocations appear in the provided command list in that exact ordering. Each cmd in expected_cmds should be a tuple of the form (ExtensionEmulator, ExtensionCommandNames object). """ for (expected_ext_emulator, command_name) in expected_cmds: try: (ext_name, ext_ver, ext_cmd) = self._queue.pop(0) if not expected_ext_emulator.matches(ext_name, ext_ver) or command_name != ext_cmd: raise Exception("Unexpected invocation: have ({0}, {1}, {2}), but expected ({3}, {4}, {5})".format( ext_name, ext_ver, ext_cmd, expected_ext_emulator.name, expected_ext_emulator.version, command_name )) except IndexError: raise Exception("No more invocations recorded. Expected ({0}, {1}, {2}).".format(expected_ext_emulator.name, expected_ext_emulator.version, command_name)) if self._queue: raise Exception("Invocation recorded, but not expected: ({0}, {1}, {2})".format( *self._queue[0] )) def _first_matching_emulator(emulators, name, version): for ext in emulators: if ext.matches(name, version): return ext raise StopIteration class ExtensionEmulator: """ A wrapper class for the possible actions and options that an extension might support. """ def __init__(self, name, version, update_mode, report_heartbeat, continue_on_update_failure, supports_multiple_extensions, install_action, uninstall_action, enable_action, disable_action, update_action): # Linter reports too many arguments, but this constructor has its access mediated by # a factory method; the calls affected by the number of arguments here is very # limited in scope. 
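`InvocationRecord` above pairs with a patched `subprocess.Popen`: each intercepted call is appended to a queue, and `compare()` later verifies the exact order. The record-and-verify idea can be shown with a small self-contained sketch; `Record` and `record_calls` are illustrative names, but `unittest.mock.patch` is the same mechanism `enable_invocations` uses:

```python
import contextlib
import os.path
from unittest.mock import patch

class Record:
    """Collects the positional arguments of every intercepted call."""
    def __init__(self):
        self.calls = []

@contextlib.contextmanager
def record_calls(target, record):
    # Replace `target` with a spy that logs its arguments, then returns a stub.
    def spy(*args, **kwargs):
        record.calls.append(args)
        return True
    with patch(target, side_effect=spy):
        yield record

# Usage: intercept os.path.exists and inspect the recorded calls afterwards.
rec = Record()
with record_calls("os.path.exists", rec):
    os.path.exists("/tmp")
    os.path.exists("/var")
```

After the `with` block exits, the original function is restored automatically, which is why the context-manager form is preferred over patching and unpatching by hand.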
self.name = name self.version = version self.update_mode = update_mode self.report_heartbeat = report_heartbeat self.continue_on_update_failure = continue_on_update_failure self.supports_multiple_extensions = supports_multiple_extensions self._actions = { ExtensionCommandNames.INSTALL: ExtensionEmulator._extend_func(install_action), ExtensionCommandNames.UNINSTALL: ExtensionEmulator._extend_func(uninstall_action), ExtensionCommandNames.UPDATE: ExtensionEmulator._extend_func(update_action), ExtensionCommandNames.ENABLE: ExtensionEmulator._extend_func(enable_action), ExtensionCommandNames.DISABLE: ExtensionEmulator._extend_func(disable_action) } self._status_blobs = [] @property def actions(self): """ A read-only property designed to allow inspection of the emulated extension's actions. `actions` maps an ExtensionCommandNames object to a mock wrapping the function this emulator was initialized with. """ return self._actions @property def status_blobs(self): """ A property for storing and retrieving the status blobs for the extension this object is emulating that are uploaded to the HTTP PUT /status endpoint. """ return self._status_blobs @staticmethod def _extend_func(func): """ Convert a function such that its returned value mimics a Popen object (i.e. with correct return values for poll() and wait() calls). """ def wrapped_func(cmd, *args, **kwargs): return_value = func(cmd, *args, **kwargs) prefix = kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber] if ExtCommandEnvVariable.ExtensionName in kwargs['env']: prefix = "{0}.{1}".format(kwargs['env'][ExtCommandEnvVariable.ExtensionName], prefix) status_file = os.path.join(os.path.dirname(cmd), "status", "{seq}.status".format(seq=prefix)) if return_value == 0: status_contents = [{ "status": {"status": "success"} }] else: try: ec = int(return_value) except Exception: # Error when trying to parse return_value, probably not an integer.
# Failing with -1 and passing the return_value as message ec = -1 status_contents = [{"status": {"status": "error", "code": ec, "formattedMessage": {"message": return_value, "lang": "en-US"}}}] fileutil.write_file(status_file, json.dumps(status_contents)) return Mock(**{ "poll.return_value": return_value, "wait.return_value": return_value }) # Wrap the function in a mock to allow invocation reflection a la .assert_not_called(), etc. return Mock(wraps=wrapped_func) def matches(self, name, version): return self.name == name and self.version == version def generate_patched_popen(invocation_record, *emulators): """ Create a mock popen function able to invoke the proper action for an extension emulator in emulators. """ original_popen = subprocess.Popen def patched_popen(cmd, *args, **kwargs): try: handler_name, handler_version, command_name = extract_extension_info_from_command(cmd) except ValueError: return original_popen(cmd, *args, **kwargs) try: name = handler_name # MultiConfig scenario, search for full name - <HandlerName>.<ExtensionName> if ExtCommandEnvVariable.ExtensionName in kwargs['env']: name = "{0}.{1}".format(handler_name, kwargs['env'][ExtCommandEnvVariable.ExtensionName]) invocation_record.add(name, handler_version, command_name) matching_ext = _first_matching_emulator(emulators, name, handler_version) return matching_ext.actions[command_name](cmd, *args, **kwargs) except StopIteration: raise Exception("Extension('{name}', '{version}') not listed as a parameter. Is it being emulated?".format( name=handler_name, version=handler_version )) return patched_popen def generate_mock_load_manifest(*emulators): original_load_manifest = ExtHandlerInstance.load_manifest def mock_load_manifest(self): matching_emulator = None names = [self.ext_handler.name] # In case of MC, search for full names - <HandlerName>.<ExtensionName>
if self.supports_multi_config: names = [self.get_extension_full_name(ext) for ext in self.extensions] for name in names: try: matching_emulator = _first_matching_emulator(emulators, name, self.ext_handler.version) except StopIteration: continue else: break if matching_emulator is None: raise Exception( "Extension('{name}', '{version}') not listed as a parameter. Is it being emulated?".format( name=self.ext_handler.name, version=self.ext_handler.version)) base_manifest = original_load_manifest(self) base_manifest.data["handlerManifest"].update({ "continueOnUpdateFailure": matching_emulator.continue_on_update_failure, "reportHeartbeat": matching_emulator.report_heartbeat, "updateMode": matching_emulator.update_mode, "supportsMultipleExtensions": matching_emulator.supports_multiple_extensions }) return base_manifest return mock_load_manifest def extract_extension_info_from_command(command): """ Parse a command into a tuple of extension info. """ if not isinstance(command, (str, ustr)): raise ValueError("Cannot extract extension info from non-string commands") # Group layout of the expected command; this lets us grab what we want after a match template = r'(?<={base_dir}/)(?P<name>{ext_name})-(?P<ver>{ext_ver})(?:/{script_file} -)(?P<cmd>{ext_cmd})' base_dir_regex = conf.get_lib_dir() script_file_regex = r'[^\s]+' ext_cmd_regex = r'[a-zA-Z]+' ext_name_regex = r'[a-zA-Z]+(?:[.a-zA-Z]+)?'
ext_ver_regex = r'[0-9]+(?:[.0-9]+)*' full_regex = template.format( ext_name=ext_name_regex, ext_ver=ext_ver_regex, base_dir=base_dir_regex, script_file=script_file_regex, ext_cmd=ext_cmd_regex ) match_obj = re.search(full_regex, command) if not match_obj: raise ValueError("Command does not match the desired format: {0}".format(command)) return match_obj.group('name', 'ver', 'cmd')Azure-WALinuxAgent-a976115/tests/lib/http_request_predicates.py000066400000000000000000000112361510742556200246110ustar00rootroot00000000000000import re from azurelinuxagent.common.utils import restutil class HttpRequestPredicates(object): """ Utility functions to check the urls used by tests """ @staticmethod def is_goal_state_request(url): return url.lower() == 'http://{0}/machine/?comp=goalstate'.format(restutil.KNOWN_WIRESERVER_IP) @staticmethod def is_certificates_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=certificates'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_extensions_config_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=config&type=extensionsConfig'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_hosting_environment_config_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=config&type=hostingEnvironmentConfig'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_shared_config_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=config&type=sharedConfig'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_telemetry_request(url): return url.lower() == 'http://{0}/machine?comp=telemetrydata'.format(restutil.KNOWN_WIRESERVER_IP) @staticmethod def is_health_service_request(url): return url.lower() == 'http://{0}:80/healthservice'.format(restutil.KNOWN_WIRESERVER_IP) @staticmethod def is_in_vm_artifacts_profile_request(url): return 
re.match(r'https://.+\.blob\.core\.windows\.net/\$system/.+\.(vmSettings|settings)\?.+', url) is not None @staticmethod def _get_host_plugin_request_artifact_location(url, request_kwargs): if 'headers' not in request_kwargs: raise ValueError('Host plugin request is missing HTTP headers ({0})'.format(url)) headers = request_kwargs['headers'] if 'x-ms-artifact-location' not in headers: raise ValueError('Host plugin request is missing the x-ms-artifact-location header ({0})'.format(url)) return headers['x-ms-artifact-location'] @staticmethod def is_host_plugin_vm_settings_request(url): return url.lower() == 'http://{0}:{1}/vmsettings'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_host_plugin_health_request(url): return url.lower() == 'http://{0}:{1}/health'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_host_plugin_extension_artifact_request(url): return url.lower() == 'http://{0}:{1}/extensionartifact'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_status_request(url): return HttpRequestPredicates.is_storage_status_request(url) or HttpRequestPredicates.is_host_plugin_status_request(url) @staticmethod def is_storage_status_request(url): # e.g. 
'https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo' return re.match(r'^https://.+/.*\.status\?[^/]+$', url, re.IGNORECASE) @staticmethod def is_host_plugin_status_request(url): return url.lower() == 'http://{0}:{1}/status'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_host_plugin_extension_request(request_url, request_kwargs, extension_url): if not HttpRequestPredicates.is_host_plugin_extension_artifact_request(request_url): return False artifact_location = HttpRequestPredicates._get_host_plugin_request_artifact_location(request_url, request_kwargs) return artifact_location == extension_url @staticmethod def is_host_plugin_in_vm_artifacts_profile_request(url, request_kwargs): if not HttpRequestPredicates.is_host_plugin_extension_artifact_request(url): return False artifact_location = HttpRequestPredicates._get_host_plugin_request_artifact_location(url, request_kwargs) return HttpRequestPredicates.is_in_vm_artifacts_profile_request(artifact_location) @staticmethod def is_host_plugin_put_logs_request(url): return url.lower() == 'http://{0}:{1}/vmagentlog'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_agent_package_request(url): return re.match(r"^http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__([\d.]+)$", url) is not None @staticmethod def is_ga_manifest_request(url): return "manifest_of_ga.xml" in url Azure-WALinuxAgent-a976115/tests/lib/miscellaneous_tools.py000066400000000000000000000040311510742556200237350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
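The URL predicates in http_request_predicates.py above are thin regex wrappers; a minimal, standalone sketch of the storage-status check (reimplemented here without the restutil constants, so the helper below is illustrative rather than the class method itself) shows the matching behavior:

```python
import re

# Illustrative, standalone version of the is_storage_status_request()
# predicate above: a storage status request is an HTTPS URL whose path
# ends in ".status" followed by a query string containing no further "/".
def is_storage_status_request(url):
    return re.match(r'^https://.+/.*\.status\?[^/]+$', url, re.IGNORECASE) is not None

# Matches a blob status URL like the sample in the comment above (shortened).
assert is_storage_status_request(
    "https://test.blob.core.windows.net/vhds/test-cs12.status?sr=b&sp=rw")
# Host plugin status requests go to a plain HTTP endpoint and do not match.
assert not is_storage_status_request("http://168.63.129.16:32526/status")
```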
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # # # Utility functions for the unit tests. # # This module is meant for simple, small tools that don't fit elsewhere. # import datetime import os import time from azurelinuxagent.common.future import ustr, UTC def format_processes(pid_list): """ Formats the given PIDs as a sequence of PIDs and their command lines """ def get_command_line(pid): try: cmdline = '/proc/{0}/cmdline'.format(pid) if os.path.exists(cmdline): with open(cmdline, "r") as cmdline_file: return "[PID: {0}] {1}".format(pid, cmdline_file.read()) except Exception: pass return "[PID: {0}] UNKNOWN".format(pid) return ustr([get_command_line(pid) for pid in pid_list]) def wait_for(predicate, timeout=10, frequency=0.01): """ Waits until the given predicate is true or the given timeout elapses. Returns the last evaluation of the predicate. Both the timeout and frequency are in seconds; the latter indicates how often the predicate is evaluated. 
""" def to_seconds(time_delta): return (time_delta.microseconds + (time_delta.seconds + time_delta.days * 24 * 3600) * 10 ** 6) / 10 ** 6 start_time = datetime.datetime.now(UTC) while to_seconds(datetime.datetime.now(UTC) - start_time) < timeout: if predicate(): return True time.sleep(frequency) return False Azure-WALinuxAgent-a976115/tests/lib/mock_cgroup_environment.py000066400000000000000000000263021510742556200246130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import os from tests.lib.tools import patch, data_dir from tests.lib.mock_environment import MockEnvironment, MockCommand # Mocked commands which are common between v1, v2, and hybrid cgroup environments _MOCKED_COMMANDS_COMMON = [ MockCommand(r"^systemctl --version$", '''systemd 237 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid '''), MockCommand(r"^systemctl show walinuxagent\.service --property ControlGroup$", '''ControlGroup=/system.slice/walinuxagent.service '''), MockCommand(r"^systemctl show walinuxagent\.service --property Slice", '''Slice=system.slice '''), MockCommand(r"^systemctl show extension\.service --property ControlGroup$", '''ControlGroup=/system.slice/extension.service '''), MockCommand(r"^systemctl show (.+) --property CPUQuotaPerSecUSec$", '''CPUQuotaPerSecUSec=infinity '''), MockCommand(r"^systemctl show (.+) --property CPUAccounting$", '''CPUAccounting=no '''), MockCommand(r"^systemctl show (.+) --property MemoryAccounting$", '''MemoryAccounting=no '''), MockCommand(r"^systemctl show (.+) --property LoadState$", '''LoadState=loaded '''), MockCommand(r"^systemctl set-property (.+) --runtime", ""), MockCommand(r"^systemctl stop ([^\s]+)"), MockCommand(r"^systemd-run (.+) --unit=([^\s]+) --scope ([^\s]+)", ''' Running scope as unit: TEST_UNIT.scope Thu 28 May 2020 07:25:55 AM PDT '''), ] _MOCKED_COMMANDS_V1 = [ MockCommand(r"^findmnt -t cgroup --noheadings$", '''/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd /sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices /sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma /sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event /sys/fs/cgroup/net_cls,net_prio cgroup cgroup 
rw,nosuid,nodev,noexec,relatime,net_cls,net_prio /sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio /sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset /sys/fs/cgroup/misc cgroup cgroup rw,nosuid,nodev,noexec,relatime,misc /sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct /sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory /sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer /sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb /sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids '''), MockCommand(r"^findmnt -t cgroup2 --noheadings$", ''), MockCommand(r"^stat -f --format=%T /sys/fs/cgroup$", 'tmpfs'), ] _MOCKED_COMMANDS_V2 = [ MockCommand(r"^findmnt -t cgroup2 --noheadings$", '''/sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot '''), MockCommand(r"^findmnt -t cgroup --noheadings$", ''), MockCommand(r"^stat -f --format=%T /sys/fs/cgroup$", 'cgroup2fs'), ] _MOCKED_COMMANDS_HYBRID = [ MockCommand(r"^findmnt -t cgroup --noheadings$", '''/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd /sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices /sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma /sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event /sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio /sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio /sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset /sys/fs/cgroup/misc cgroup cgroup rw,nosuid,nodev,noexec,relatime,misc /sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct /sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory /sys/fs/cgroup/freezer cgroup cgroup 
rw,nosuid,nodev,noexec,relatime,freezer /sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb /sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids '''), MockCommand(r"^findmnt -t cgroup2 --noheadings$", '''/sys/fs/cgroup/unified cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate '''), MockCommand(r"^stat -f --format=%T /sys/fs/cgroup$", 'tmpfs'), MockCommand(r"^stat -f --format=%T /sys/fs/cgroup/unified$", 'cgroup2fs'), ] _MOCKED_FILES_V1 = [ ("/proc/self/cgroup", os.path.join(data_dir, 'cgroups', 'v1', 'proc_self_cgroup')), (r"/proc/[0-9]+/cgroup", os.path.join(data_dir, 'cgroups', 'v1', 'proc_pid_cgroup')), (r"/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service/cgroup.procs", os.path.join(data_dir, 'cgroups', 'cgroup.procs')), (r"/sys/fs/cgroup/memory/system.slice/walinuxagent.service/cgroup.procs", os.path.join(data_dir, 'cgroups', 'cgroup.procs')) ] _MOCKED_FILES_V2 = [ ("/proc/self/cgroup", os.path.join(data_dir, 'cgroups', 'v2', 'proc_self_cgroup')), (r"/proc/[0-9]+/cgroup", os.path.join(data_dir, 'cgroups', 'v2', 'proc_pid_cgroup')), ("/sys/fs/cgroup/cgroup.subtree_control", os.path.join(data_dir, 'cgroups', 'v2', 'sys_fs_cgroup_cgroup.subtree_control')), ("/sys/fs/cgroup/azure.slice/cgroup.subtree_control", os.path.join(data_dir, 'cgroups', 'v2', 'sys_fs_cgroup_cgroup.subtree_control')), ("/sys/fs/cgroup/azure.slice/walinuxagent.service/cgroup.subtree_control", os.path.join(data_dir, 'cgroups', 'v2', 'sys_fs_cgroup_cgroup.subtree_control_empty')), (r"/sys/fs/cgroup/system.slice/walinuxagent.service/cgroup.procs", os.path.join(data_dir, 'cgroups', 'cgroup.procs')) ] _MOCKED_FILES_HYBRID = [ ("/proc/self/cgroup", os.path.join(data_dir, 'cgroups', 'v1', 'proc_self_cgroup')), (r"/proc/[0-9]+/cgroup", os.path.join(data_dir, 'cgroups', 'v1', 'proc_pid_cgroup')), ("/sys/fs/cgroup/unified/cgroup.controllers", os.path.join(data_dir, 'cgroups', 'hybrid', 'sys_fs_cgroup_cgroup.controllers')) ] _MOCKED_PATHS = [ 
r"^(/lib/systemd/system)", r"^(/etc/systemd/system)" ] class UnitFilePaths: walinuxagent = "/lib/systemd/system/walinuxagent.service" logcollector = "/lib/systemd/system/azure-walinuxagent-logcollector.slice" azure = "/lib/systemd/system/azure.slice" vmextensions = "/lib/systemd/system/azure-vmextensions.slice" extensionslice = "/lib/systemd/system/azure-vmextensions-Microsoft.CPlat.Extension.slice" slice = "/lib/systemd/system/walinuxagent.service.d/10-Slice.conf" cpu_accounting = "/lib/systemd/system/walinuxagent.service.d/11-CPUAccounting.conf" cpu_quota = "/lib/systemd/system/walinuxagent.service.d/12-CPUQuota.conf" memory_accounting = "/lib/systemd/system/walinuxagent.service.d/13-MemoryAccounting.conf" extension_service_cpu_accounting = '/lib/systemd/system/extension.service.d/11-CPUAccounting.conf' extension_service_cpu_quota = '/lib/systemd/system/extension.service.d/12-CPUQuota.conf' extension_service_memory_accounting = '/lib/systemd/system/extension.service.d/13-MemoryAccounting.conf' extension_service_memory_limit = '/lib/systemd/system/extension.service.d/14-MemoryLimit.conf' @contextlib.contextmanager def mock_cgroup_v1_environment(tmp_dir): """ Creates a mock environment for cgroup v1 hierarchy used by the tests related to cgroups (currently it only provides support for systemd platforms). The command output used in __MOCKED_COMMANDS comes from an Ubuntu 20 system. 
""" data_files = [ (os.path.join(data_dir, 'init', 'walinuxagent.service'), UnitFilePaths.walinuxagent), (os.path.join(data_dir, 'init', 'azure.slice'), UnitFilePaths.azure), (os.path.join(data_dir, 'init', 'azure-vmextensions.slice'), UnitFilePaths.vmextensions) ] with patch('azurelinuxagent.ga.cgroupapi.CGroupUtil.distro_supported', return_value=True): with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=True): with patch('azurelinuxagent.common.conf.get_cgroup_disable_on_process_check_failure', return_value=False): with MockEnvironment(tmp_dir, commands=_MOCKED_COMMANDS_COMMON + _MOCKED_COMMANDS_V1, paths=_MOCKED_PATHS, files=_MOCKED_FILES_V1, data_files=data_files) as mock: yield mock @contextlib.contextmanager def mock_cgroup_v2_environment(tmp_dir): """ Creates a mock environment for cgroup v2 hierarchy used by the tests related to cgroups (currently it only provides support for systemd platforms). The command output used in __MOCKED_COMMANDS comes from an Ubuntu 22 system. """ data_files = [ (os.path.join(data_dir, 'init', 'walinuxagent.service'), UnitFilePaths.walinuxagent), (os.path.join(data_dir, 'init', 'azure.slice'), UnitFilePaths.azure), (os.path.join(data_dir, 'init', 'azure-vmextensions.slice'), UnitFilePaths.vmextensions) ] with patch('azurelinuxagent.ga.cgroupapi.CGroupUtil.distro_supported', return_value=True): with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=True): with patch('azurelinuxagent.common.conf.get_cgroup_disable_on_process_check_failure', return_value=False): with MockEnvironment(tmp_dir, commands=_MOCKED_COMMANDS_COMMON + _MOCKED_COMMANDS_V2, paths=_MOCKED_PATHS, files=_MOCKED_FILES_V2, data_files=data_files) as mock: yield mock @contextlib.contextmanager def mock_cgroup_hybrid_environment(tmp_dir): """ Creates a mock environment for cgroup hybrid hierarchy used by the tests related to cgroups (currently it only provides support for systemd platforms). 
""" data_files = [ (os.path.join(data_dir, 'init', 'walinuxagent.service'), UnitFilePaths.walinuxagent), (os.path.join(data_dir, 'init', 'azure.slice'), UnitFilePaths.azure), (os.path.join(data_dir, 'init', 'azure-vmextensions.slice'), UnitFilePaths.vmextensions) ] with patch('azurelinuxagent.ga.cgroupapi.CGroupUtil.distro_supported', return_value=True): with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=True): with patch('azurelinuxagent.common.conf.get_cgroup_disable_on_process_check_failure', return_value=False): with MockEnvironment(tmp_dir, commands=_MOCKED_COMMANDS_COMMON + _MOCKED_COMMANDS_HYBRID, paths=_MOCKED_PATHS, files=_MOCKED_FILES_HYBRID, data_files=data_files) as mock: yield mock Azure-WALinuxAgent-a976115/tests/lib/mock_command.py000077500000000000000000000026171510742556200223140ustar00rootroot00000000000000#!/usr/bin/env python3 import os import sys if len(sys.argv) < 4: sys.stderr.write("usage: {0} ".format(os.path.basename(__file__))) # W0632: Possible unbalanced tuple unpacking with sequence: left side has 3 label(s), right side has 0 value(s) (unbalanced-tuple-unpacking) # Disabled: Unpacking is balanced: there is a check for the length on line 5 # This script will be used for mocking cgroups commands in test, when popen called this script will be executed instead of actual commands # We pass stdout, return_value, stderr of the mocked command output as arguments to this script and this script will print them to stdout, stderr and exit with the return value # So that popen gets the output of the mocked command. Ideally we should get 4 arguments in sys.argv, first one is the script name, next 3 are the actual command output # But somehow when we run the tests from pycharm, it adds extra arguments next to the script name, so we need to handle that when reading the arguments # ex: /home/nag/Documents/repos/WALinuxAgent/tests/lib/mock_command.py /snap/pycharm-professional/412/plugins/python-ce/helpers/py... 
+BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid\n 0 stdout, return_value, stderr = sys.argv[-3:] # pylint: disable=W0632 if stdout != '': sys.stdout.write(stdout) if stderr != '': sys.stderr.write(stderr) sys.exit(int(return_value)) Azure-WALinuxAgent-a976115/tests/lib/mock_environment.py000066400000000000000000000162701510742556200232370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re import shutil import subprocess from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import fileutil from tests.lib.tools import patch, patch_builtin class MockCommand: def __init__(self, command, stdout='', return_value=0, stderr=''): self.command = command self.stdout = stdout self.return_value = return_value self.stderr = stderr def __str__(self): return ' '.join(self.command) if isinstance(self.command, list) else self.command class MockEnvironment: """ A MockEnvironment is a Context Manager that can be used to mock a set of commands, file system paths, and/or files. It can be useful in tests that need to execute commands or access/modify files that are not available in all platforms, or require root privileges, or can change global system settings. For a sample usage see the mock_cgroup_environment() function. 
Currently, MockEnvironment mocks subprocess.Popen(), fileutil.mkdir(), os.path.exists() and the builtin open() function (it mocks fileutil.mkdir() instead of os.mkdir() because the agent's code uses the former, since it provides backwards compatibility with Python 2). The mock for Popen looks for a match in the given 'commands' and, if found, forwards the call to mock_command.py, which produces the output specified by the matching item. Otherwise it forwards the call to the original Popen function. The mocks for the other functions first look for a match in the given 'files' array and, if found, map the file to the corresponding path in the matching item (if the mapping points to an Exception, the Exception is raised). If there is no match, then it checks if the file is included in the given 'paths' array and maps the path to the given 'tmp_dir' (e.g. "/lib/systemd/system" becomes "<tmp_dir>/lib/systemd/system".) If there are no matches, the path is not changed. Once this mapping has completed, the mocks invoke the corresponding original function. Matches are done using regular expressions; the regular expressions in 'paths' must create group 0 to indicate the section of the path that needs to be mapped (i.e. use parentheses around the section that needs to be mapped.) The items in the given 'data_files' are copied to the 'tmp_dir'. The add_*() methods insert new items into the list of mock objects. Items added by these methods take precedence over the items provided to the __init__() method.
""" def __init__(self, tmp_dir, commands=None, paths=None, files=None, data_files=None): # make a copy of the arrays passed as arguments since individual tests can modify them self.tmp_dir = tmp_dir self.commands = [] if commands is None else commands[:] self.paths = [] if paths is None else paths[:] self.files = [] if files is None else files[:] self._data_files = data_files self._commands_call_list = [] # get references to the functions we'll mock so that we can call the original implementations self._original_popen = subprocess.Popen self._original_mkdir = fileutil.mkdir self._original_path_exists = os.path.exists self._original_os_remove = os.remove self._original_open = open self.patchers = [ patch_builtin("open", side_effect=self._mock_open), patch("subprocess.Popen", side_effect=self._mock_popen), patch("os.path.exists", side_effect=self._mock_path_exists), patch("os.remove", side_effect=self._mock_os_remove), patch("azurelinuxagent.common.utils.fileutil.mkdir", side_effect=self._mock_mkdir) ] def __enter__(self): if self._data_files is not None: for items in self._data_files: self.add_data_file(items[0], items[1]) try: for patcher in self.patchers: patcher.start() except Exception: self._stop_patchers() raise return self def __exit__(self, *_): self._stop_patchers() def _stop_patchers(self): for patcher in self.patchers: try: patcher.stop() except Exception: pass def add_command(self, command): self.commands.insert(0, command) def add_path(self, mock): self.paths.insert(0, mock) def add_file(self, actual, mock): self.files.insert(0, (actual, mock)) def add_data_file(self, source, target): shutil.copyfile(source, self.get_mapped_path(target)) def get_mapped_path(self, path): for item in self.files: match = re.match(item[0], path) if match is not None: return item[1] for item in self.paths: mapped = re.sub(item, r"{0}\1".format(self.tmp_dir), path) if mapped != path: mapped_parent = os.path.split(mapped)[0] if not self._original_path_exists(mapped_parent): 
os.makedirs(mapped_parent) return mapped return path @property def commands_call_list(self): return self._commands_call_list def _mock_popen(self, command, *args, **kwargs): if isinstance(command, list): command_string = " ".join(command) else: command_string = command for cmd in self.commands: match = re.match(cmd.command, command_string) if match is not None: mock_script = os.path.join(os.path.split(__file__)[0], "mock_command.py") if 'shell' in kwargs and kwargs['shell']: command = "{0} '{1}' {2} '{3}'".format(mock_script, cmd.stdout, cmd.return_value, cmd.stderr) else: command = [mock_script, cmd.stdout, ustr(cmd.return_value), cmd.stderr] break self._commands_call_list.append(command_string) return self._original_popen(command, *args, **kwargs) def _mock_mkdir(self, path, *args, **kwargs): return self._original_mkdir(self.get_mapped_path(path), *args, **kwargs) def _mock_open(self, path, *args, **kwargs): mapped_path = self.get_mapped_path(path) if isinstance(mapped_path, Exception): raise mapped_path return self._original_open(mapped_path, *args, **kwargs) def _mock_path_exists(self, path): return self._original_path_exists(self.get_mapped_path(path)) def _mock_os_remove(self, path): return self._original_os_remove(self.get_mapped_path(path)) Azure-WALinuxAgent-a976115/tests/lib/mock_firewall_command.py000066400000000000000000000366001510742556200241750ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
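The path-mapping rule used by get_mapped_path() above can be sketched in isolation. This is a simplified illustration, not the class method itself; tmp_dir and the pattern list mirror the _MOCKED_PATHS convention, where the parenthesized section of each regex is the part of the path that gets re-rooted under the temporary directory:

```python
import re

tmp_dir = "/tmp/mock-env"  # stand-in for the MockEnvironment tmp_dir
path_patterns = [r"^(/lib/systemd/system)", r"^(/etc/systemd/system)"]

# The captured group (\1) is prefixed with tmp_dir, exactly as in
# get_mapped_path(); unmatched paths pass through unchanged.
def map_path(path):
    for pattern in path_patterns:
        mapped = re.sub(pattern, r"{0}\1".format(tmp_dir), path)
        if mapped != path:
            return mapped
    return path

assert map_path("/lib/systemd/system/walinuxagent.service") == \
    "/tmp/mock-env/lib/systemd/system/walinuxagent.service"
assert map_path("/var/lib/waagent") == "/var/lib/waagent"
```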
# # Requires Python 2.6+ and Openssl 1.0+ # import os import re from azurelinuxagent.common.utils import shellutil from tests.lib.tools import patch class _MockFirewallCommand(object): """ Abstract base class for the MockIpTables and MockFirewallCmd classes. Intercepts calls to shellutil.run_command and mocks the behavior of the firewall command-line utilities using a pre-defined set of return values. """ def __init__(self, command_name, check_option, add_option, delete_option): self._command_name = command_name self._check_option = check_option self._add_option = add_option self._delete_option = delete_option self._call_list = [] self._original_run_command = shellutil.run_command self._run_command_patcher = patch("azurelinuxagent.ga.firewall_manager.shellutil.run_command", side_effect=self._mock_run_command) # # Return values for each command-line option (add, check, delete) indexed by rule type (ACCEPT DNS, ACCEPT, DROP, legacy). # These default values indicate success, and can be overridden with set_return_values(). 
# self._return_values = { add_option: { "ACCEPT DNS": 0, "ACCEPT": 0, "DROP": 0, "legacy": 0, }, check_option: { "ACCEPT DNS": 0, "ACCEPT": 0, "DROP": 0, "legacy": 0, }, delete_option: { "ACCEPT DNS": 0, "ACCEPT": 0, "DROP": 0, "legacy": 0, } } def __enter__(self): self._run_command_patcher.start() return self def __exit__(self, exc_type, exc_value, exc_traceback): self._run_command_patcher.stop() def _mock_run_command(self, command, *args, **kwargs): if command[0] == self._command_name: command_string = " ".join(command) command = ['sh', '-c', "exit {0}".format(self._get_return_value(command_string))] self._call_list.append(command_string) return self._original_run_command(command, *args, **kwargs) @property def check_option(self): return self._check_option @property def add_option(self): return self._add_option @property def delete_option(self): return self._delete_option @property def call_list(self): """ Returns the list of commands that were executed by the mock """ return self._call_list def set_return_values(self, option, accept_dns, accept, drop, legacy): """ Changes the return values for the mocked command """ self._return_values[option]["ACCEPT DNS"] = accept_dns self._return_values[option]["ACCEPT"] = accept self._return_values[option]["DROP"] = drop self._return_values[option]["legacy"] = legacy def _get_return_value(self, command): raise NotImplementedError() @staticmethod def get_accept_dns_command(option): raise NotImplementedError() @staticmethod def get_accept_command(option): raise NotImplementedError() @staticmethod def get_drop_command(option): raise NotImplementedError() @staticmethod def get_legacy_command(option): raise NotImplementedError() class MockIpTables(_MockFirewallCommand): """ Mock for the iptables command """ def __init__(self, version='1.4.21'): super(MockIpTables, self).__init__(command_name="iptables", check_option="-C", add_option="-A", delete_option="-D") self._version = version # Currently the Agent calls delete repeatedly 
until it returns 1, indicating that the rule does not exist (and hence the rule has been deleted successfully) self.set_return_values("-D", 1, 1, 1, 1) def _mock_run_command(self, command, *args, **kwargs): if command[0] == 'iptables' and command[1] == '--version': return self._original_run_command(['echo', 'iptables v{0} (nf_tables)'.format(self._version)], *args, **kwargs) return super(MockIpTables, self)._mock_run_command(command, *args, **kwargs) def _get_return_value(self, command): """ Possible commands are: * ACCEPT DNS rule: iptables [-w] -t security <-A|-C|-D> OUTPUT -d 168.63.129.16 -p tcp --destination-port 53 -j ACCEPT * ACCEPT rule: iptables [-w] -t security <-A|-C|-D> OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner -j ACCEPT * DROP rule: iptables [-w] -t security <-A|-C|-D> OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP * Legacy rule: iptables [-w] -t security <-A|-C|-D> OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j ACCEPT """ match = re.match(r"iptables (-w )?-t security (?P

\n
' # # ResourceGone can happen if we are fetching one of the URIs in the goal state and a new goal state arrives { 'message': r"(?s)(Fetching the goal state failed|Error fetching goal state|Error fetching the goal state).*(\[ResourceGoneError\]|\[410: Gone\]|Resource is gone)", 'if': lambda r: r.level in ("WARNING", "ERROR") }, # # 2022-12-02T05:45:51.771876Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] [Wireserver Exception] [HttpError] [HTTP Failed] GET http://168.63.129.16/machine/ -- IOError [Errno 104] Connection reset by peer -- 6 attempts made # 2025-09-06T08:47:16.941247Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] [Wireserver Exception] [HttpError] [HTTP Failed] GET http://168.63.129.16/machine/ -- IOError timed out -- 6 attempts made # { 'message': r"\[HttpError\] \[HTTP Failed\] GET http://168.63.129.16/machine/ -- IOError (\[Errno 104\] Connection reset by peer|timed out)", 'if': lambda r: r.level in ("WARNING", "ERROR") and self._increment_counter("ProtocolError-Goalstate-IOError") < 6 # ignore unless there are 6 or more instances }, # # 2022-03-08T03:03:23.036161Z WARNING ExtHandler ExtHandler Fetch failed from [http://168.63.129.16:32526/extensionArtifact]: [HTTP Failed] [400: Bad Request] b'' # 2022-03-08T03:03:23.042008Z WARNING ExtHandler ExtHandler Fetch failed: [ProtocolError] Fetch failed from [http://168.63.129.16:32526/extensionArtifact]: [HTTP Failed] [400: Bad Request] b'' # # Warning downloading extension manifest. 
If the issue persists, this would cause errors elsewhere so safe to ignore { 'message': r"\[http://168.63.129.16:32526/extensionArtifact\]: \[HTTP Failed\] \[400: Bad Request\]", 'if': lambda r: r.level == "WARNING" }, # # 2022-03-29T05:52:10.089958Z WARNING ExtHandler ExtHandler An error occurred while retrieving the goal state: [ProtocolError] GET vmSettings [correlation ID: da106cf5-83a0-44ec-9484-d0e9223847ab eTag: 9856274988128027586]: Timeout # # Ignore warnings about timeouts in vmSettings; if the condition persists, an error will occur elsewhere. # { 'message': r"GET vmSettings \[[^]]+\]: Timeout", 'if': lambda r: r.level == "WARNING" }, # # 2022-09-30T02:48:33.134649Z WARNING MonitorHandler ExtHandler Error in SendHostPluginHeartbeat: [HttpError] [HTTP Failed] GET http://168.63.129.16:32526/health -- IOError timed out -- 1 attempts made --- [NOTE: Will not log the same error for the next hour] # # Ignore timeouts in the HGAP's health API... those are tracked in the HGAP dashboard so no need to worry about them on test runs # { 'message': r"SendHostPluginHeartbeat:.*GET http://168.63.129.16:32526/health.*timed out", 'if': lambda r: r.level == "WARNING" }, # # 2025-09-06T08:47:42.367645Z WARNING MonitorHandler ExtHandler Error in SendHostPluginHeartbeat: [ProtocolError] [Wireserver Exception] [HttpError] [HTTP Failed] GET http://168.63.129.16/machine/ -- IOError timed out -- 6 attempts made --- [NOTE: Will not log the same error for the next hour] # # As part of health check, we refresh goal state to update HGPA with latest containerId. That goal state refresh failures can be ignored as if the issue persist the log would include other errors as well. 
# { 'message': r"SendHostPluginHeartbeat:.*GET http:\/\/168.63.129.16\/machine.*timed out", 'if': lambda r: r.level == "WARNING" and self._increment_counter("SendHostPluginHeartbeat-Goalstate-timedout") < 6 # ignore unless there are 6 or more instances }, # 2022-09-30T03:09:25.013398Z WARNING MonitorHandler ExtHandler Error in SendHostPluginHeartbeat: [ResourceGoneError] [HTTP Failed] [410: Gone] # # ResourceGone should not happen very often, since the monitor thread already refreshes the goal state before sending the HostGAPlugin heartbeat. Errors can still happen, though, since the goal state # can change in-between the time at which the monitor thread refreshes and the time at which it sends the heartbeat. Ignore these warnings unless there are 2 or more of them. # { 'message': r"SendHostPluginHeartbeat:.*ResourceGoneError.*410", 'if': lambda r: r.level == "WARNING" and self._increment_counter("SendHostPluginHeartbeat-ResourceGoneError-410") < 2 # ignore unless there are 2 or more instances }, # # 2023-01-18T02:58:25.589492Z ERROR SendTelemetryHandler ExtHandler Event: name=WALinuxAgent, op=ReportEventErrors, message=DroppedEventsCount: 1 # Reasons (first 5 errors): [ProtocolError] [Wireserver Exception] [ProtocolError] [Wireserver Failed] URI http://168.63.129.16/machine?comp=telemetrydata [HTTP Failed] Status Code 400: Traceback (most recent call last): # { 'message': r"(?s)\[ProtocolError\].*http:\/\/168.63.129.16\/machine\?comp=telemetrydata.*Status Code 400", 'if': lambda r: r.thread == 'SendTelemetryHandler' and self._increment_counter("SendTelemetryHandler-telemetrydata-Status Code 400") < 2 # ignore unless there are 2 or more instances }, # # 2023-07-26T22:05:42.841692Z ERROR SendTelemetryHandler ExtHandler Event: name=WALinuxAgent, op=ReportEventErrors, message=DroppedEventsCount: 1 # Reasons (first 5 errors): [ProtocolError] Failed to send events:[ResourceGoneError] [HTTP Failed] [410: Gone] b'\n\n ResourceNotAvailable\n The resource requested is no 
longer available. Please refresh your cache.\n
': Traceback (most recent call last): # { 'message': r"(?s)\[ProtocolError\].*Failed to send events.*\[410: Gone\]", 'if': lambda r: r.thread == 'SendTelemetryHandler' and self._increment_counter("SendTelemetryHandler-telemetrydata-Status Code 410") < 2 # ignore unless there are 2 or more instances }, # # 2025-09-06T08:46:51.298681Z ERROR SendTelemetryHandler ExtHandler Event: name=WALinuxAgent, op=ReportEventErrors, message=DroppedEventsCount: 1 # Reasons (first 5 errors): [ProtocolError] [Wireserver Exception] [HttpError] [HTTP Failed] POST http://168.63.129.16/machine -- IOError timed out -- 3 attempts made: # { 'message': r"(?s)\[ProtocolError\].*http:\/\/168.63.129.16\/machine.*timed out", 'if': lambda r: r.thread == 'SendTelemetryHandler' and self._increment_counter("SendTelemetryHandler-telemetrydata-IOError-timed-out") < 2 # ignore unless there are 2 or more instances }, # Ignore these errors in flatcar: # # 1) 2023-03-16T14:30:33.091427Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology # 2) 2023-03-16T14:30:33.091708Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 # 3) 2023-03-16T14:30:34.660976Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required # 4) 2023-03-16T14:30:34.800112Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' # # 1, 2) under investigation # 3) There seems to be a configuration issue in flatcar that prevents python from using HTTPS when trying to reach storage. This does not produce any actual errors, since the agent fallbacks to the HGAP. # 4) Remove this when bug 17523033 is fixed. 
# { 'message': r"(Failed to mount resource disk)|(unable to detect disk topology)", 'if': lambda r: r.prefix == 'Daemon' and DISTRO_NAME == 'flatcar' }, { 'message': r"(HTTPS is unavailable and required)|(Unable to setup the persistent firewall rules.*Read-only file system)", 'if': lambda r: DISTRO_NAME == 'flatcar' }, # # AzureSecurityLinuxAgent fails to install on a few distros (e.g. Debian 11) # # 2023-03-16T14:29:48.798415Z ERROR ExtHandler ExtHandler Event: name=Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent, op=Install, message=[ExtensionOperationError] Non-zero exit code: 56, /var/lib/waagent/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent-2.21.115/handler.sh install # { 'message': r"Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent.*op=Install.*Non-zero exit code: 56,", }, # # Ignore LogCollector failure to fetch vmSettings if it recovers # # 2023-08-27T08:13:42.520557Z WARNING MainThread LogCollector Fetch failed: [HttpError] [HTTP Failed] GET https://md-hdd-tkst3125n3x0.blob.core.chinacloudapi.cn/$system/lisa-WALinuxAgent-20230827-080144-029-e0-n0.cb9a406f-584b-4702-98bb-41a3ad5e334f.vmSettings -- IOError timed out -- 6 attempts made # { 'message': r"Fetch failed:.*GET.*vmSettings.*timed out", 'if': lambda r: r.prefix == 'LogCollector' and self.agent_log_contains("LogCollector Log collection successfully completed", after_timestamp=r.timestamp) }, # # In tests, we use both autoupdate flags to install test agent with different value and changing it depending on the scenario. So, we can ignore this warning. # # 2024-01-30T22:22:37.299911Z WARNING ExtHandler ExtHandler AutoUpdate.Enabled property is **Deprecated** now but it's set to different value from AutoUpdate.UpdateToLatestVersion. 
Please consider removing it if added by mistake { 'message': r"AutoUpdate.Enabled property is \*\*Deprecated\*\* now but it's set to different value from AutoUpdate.UpdateToLatestVersion", 'if': lambda r: r.prefix == 'ExtHandler' and r.thread == 'ExtHandler' }, # # Some distros are running older agents, which do not add the DNS rule # # 2024-08-02T21:44:44.330727Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] # 2024-08-08T22:05:26.561896Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS']. Will reset it. # 2024-09-16T15:50:12.473500Z WARNING ExtHandler ExtHandler The permanent firewall rules for Azure Fabric are not setup correctly (The following rules are missing: ['ACCEPT DNS']), will reset them. # 2024-12-27T19:42:03.895387Z WARNING ExtHandler ExtHandler The permanent firewall rules for Azure Fabric are not setup correctly (The following rules are missing: ['ACCEPT DNS'] due to: ['']), will reset them. # 2024-12-27T19:38:14.093727Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. { 'message': r"(The firewall rules for Azure Fabric are not setup correctly \(the environment thread will fix it\): The following rules are missing: \['ACCEPT DNS'\])" "|" r"(The firewall is not configured correctly. The following rules are missing: \['ACCEPT DNS'\].* Will reset it.)" "|" r"The permanent firewall rules for Azure Fabric are not setup correctly \(The following rules are missing: \['ACCEPT DNS'\]\).* will reset them.", 'if': lambda r: r.level == "WARNING" }, # TODO: The Daemon has not been updated on Azure Linux 3; remove this message when it is. 
# # 2024-08-05T14:36:48.004865Z WARNING Daemon Daemon Unable to load distro implementation for azurelinux. Using default distro implementation instead. # { 'message': r"Unable to load distro implementation for azurelinux. Using default distro implementation instead.", 'if': lambda r: DISTRO_NAME == 'azurelinux' and r.prefix == 'Daemon' and r.level == 'WARNING' }, # # TODO: The OMS extension does not support Azure Linux 3; remove this message when it does. # # 2024-08-12T17:40:48.375193Z ERROR ExtHandler ExtHandler Event: name=Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux, op=Install, message=[ExtensionOperationError] Non-zero exit code: 51, /var/lib/waagent/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux-1.19.0/omsagent_shim.sh -install # { 'message': r"name=Microsoft\.EnterpriseCloud\.Monitoring\.OmsAgentForLinux.+Non-zero exit code: 51", 'if': lambda r: DISTRO_NAME == 'azurelinux' and DISTRO_VERSION == '3.0' }, # Ubuntu 16 has an issue representing no quota as infinity, instead it outputs weird values. https://github.com/systemd/systemd/issues/5965, so ignoring in ubuntu 16 # 2024-11-26T00:07:38.716162Z INFO ExtHandler ExtHandler [CGW] Error parsing current CPUQuotaPerSecUSec: could not convert string to float: '584542y 2w 2d 20h 1min 49.549568' # 2025-04-08T09:02:47.491505Z INFO ExtHandler ExtHandler [CGW] Error parsing current CPUQuotaPerSecUSec: invalid literal for float(): 584542y 2w 2d 20h 1min 49.549568 {'message': r"Error parsing current CPUQuotaPerSecUSec: (could not convert string to float|invalid literal for float)", 'if': lambda r: re.match(r"((ubuntu16\.04)|(centos7\.9))\D*", "{0}{1}".format(DISTRO_NAME, DISTRO_VERSION), flags=re.IGNORECASE) }, # # GuestConfiguration produces a lot of errors in test runs due to issues in the extension. 
Some samples: # # 2024-12-08T06:28:34.480675Z ERROR ExtHandler ExtHandler Event: name=Microsoft.GuestConfiguration.ConfigurationforLinux, op=Install, message=[ExtensionOperationError] Non-zero exit code: 126, /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-shim install # [stdout] # Linux distribution version is 9.0. # Linux distribution is Red Hat. # + /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-extension install # /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-shim: line 211: /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-extension: cannot execute binary file: Exec format error # [stderr] # # 2024-12-26T06:35:24.233438Z ERROR ExtHandler ExtHandler Event: name=Microsoft.GuestConfiguration.ConfigurationforLinux, op=Install, message=[ExtensionOperationError] Non-zero exit code: 51, /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-shim install # [stdout] # Linux distribution version is 4081.2.1. # [stderr] # [2024-12-26T06:35:22+0000]: Unexpected Linux distribution. Expected Linux distributions include only Ubuntu, Red Hat, SUSE, CentOS, Debian or Mariner. # # 2025-01-07T11:32:28.121056Z ERROR ExtHandler ExtHandler Event: name=Microsoft.GuestConfiguration.ConfigurationforLinux, op=Install, message=[ExtensionOperationError] Non-zero exit code: 1, /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-shim install # [stdout] # Linux distribution version is 12.5. # Linux distribution is SUSE. 
# /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-extension install # /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-shim: line 211: # /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.26.79/bin/guest-configuration-extension: Text file busy # [stderr] # # Also, enable not always completes before the new goal state is received # # 2025-01-07T13:33:25.636847Z WARNING ExtHandler ExtHandler A new goal state was received, but not all the extensions in the previous goal state have completed: # [('Microsoft.Azure.Extensions.CustomScript', 'success'), ('Microsoft.GuestConfiguration.ConfigurationforLinux', 'transitioning'), ('RunCommandHandler', 'success')] # { 'message': r"(?s)name=Microsoft.GuestConfiguration.ConfigurationforLinux.*op=Install.*Non-zero exit code: (1.*Text file busy|51.*Unexpected Linux distribution|126.*Exec format error)", }, { 'message': r"A new goal state was received, but not all the extensions in the previous goal state have completed.*'Microsoft.GuestConfiguration.ConfigurationforLinux',\s+u?'transitioning'", }, # # Below systemd errors are transient and will not block extension execution # # 2025-01-05T09:38:44.046292Z INFO ExtHandler ExtHandler [CGW] Failed to set the extension azure-vmextensions-Microsoft.CPlat.Core.RunCommandHandlerLinux.slice slice and quotas: # 'systemctl show azure-vmextensions-Microsoft.CPlat.Core.RunCommandHandlerLinux.slice --property CPUAccounting' failed: 1 (Failed to get properties: Message recipient disconnected from message bus without replying) # # 2025-01-06T09:32:42.594033Z INFO ExtHandler ExtHandler [CGW] Failed to set the extension azure-vmextensions-Microsoft.Azure.Extensions.CustomScript.slice slice and quotas: # 'systemctl show azure-vmextensions-Microsoft.Azure.Extensions.CustomScript.slice --property CPUAccounting' failed: 1 (Failed to get properties: Connection reset by peer) # # 
2025-03-12T08:48:02.186772Z INFO ExtHandler ExtHandler [CGW] Error parsing current CPUQuotaPerSecUSec: 'systemctl show azure-vmextensions-Microsoft.Azure.Extensions.Edp.GATestExtGo.slice --property CPUQuotaPerSecUSec' failed: 1 (Failed to get properties: Connection reset by peer) # 2025-03-31T08:46:39.253900Z INFO ExtHandler ExtHandler [CGW] Failed to set the extension azure-vmextensions-Microsoft.Azure.Extensions.CustomScript.slice slice and quotas: Can't set properties ['CPUQuota='] of azure-vmextensions-Microsoft.Azure.Extensions.CustomScript.slice: 'systemctl set-property azure-vmextensions-Microsoft.Azure.Extensions.CustomScript.slice CPUQuota= --runtime' failed: 1 (Failed to set unit properties on azure-vmextensions-Microsoft.Azure.Extensions.CustomScript.slice: Message recipient disconnected from message bus without replying) # 2025-04-28T12:27:25.311806Z INFO ExtHandler ExtHandler [CGW] Failed to set the extension azure-vmextensions-Microsoft.CPlat.Core.RunCommandHandlerLinux.slice slice and quotas: 'systemctl show azure-vmextensions-Microsoft.CPlat.Core.RunCommandHandlerLinux.slice --property CPUAccounting' failed: 1 (Failed to get properties: Remote peer disconnected) # 2025-04-27T12:28:14.585253Z INFO ExtHandler ExtHandler [CGW] Error parsing current CPUQuotaPerSecUSec: 'systemctl show azure-vmextensions-Microsoft.CPlat.Core.RunCommandHandlerLinux.RunCommandHandler.slice --property CPUQuotaPerSecUSec' failed: 1 (Failed to get properties: Transport endpoint is not connected) { 'message': r"(Failed to set the extension|Error parsing).*systemctl (show|set-property).*failed: 1.*(Message recipient disconnected from message bus without replying|Connection reset by peer|Remote peer disconnected|Transport endpoint is not connected)", }, # # 2025-01-06T09:32:44.641948Z INFO ExtHandler ExtHandler [CGW] Disabling resource usage monitoring. 
Reason: Failed to start Microsoft.Azure.Extensions.CustomScript-2.1.10 using systemd-run, will try invoking the extension directly. Error: [SystemdRunError] Systemd process exited with code 1 and output [stdout] # # # [stderr] # Failed to start transient scope unit: Message recipient disconnected from message bus without replying # # Microsoft.CPlat.Core.RunCommandHandlerLinux.RunCommandHandler-1.3.15 using systemd-run, will try invoking the extension directly. Error: [SystemdRunError] Systemd process exited with code 1 and output [stdout] # # # [stderr] # Failed to start transient scope unit: Transport endpoint is not connected # # 2025-07-06T08:37:30.642513Z INFO ExtHandler ExtHandler [CGW] Disabling resource usage monitoring. Reason: Failed to start Microsoft.CPlat.Core.RunCommandLinux-1.0.5 using systemd-run, will try invoking the extension directly. Error: [SystemdRunError] Systemd process exited with code 1 and output [stdout] # # # [stderr] # Warning! D-Bus connection terminated. # Failed to wait for response: Connection reset by peer { 'message': r"(?s)Disabling resource usage monitoring. Reason: Failed to start.*using systemd-run, will try invoking the extension directly. Error: \[SystemdRunError\].* (Message recipient disconnected from message bus without replying|Connection reset by peer|Remote peer disconnected|Transport endpoint is not connected)", }, # # If agent is not mounted at the expected path, we log this message in v2 machines. This is not an error. # 2025-03-03T09:19:03.145557Z INFO ExtHandler ExtHandler [CGW] The walinuxagent.service cgroup is not mounted at the expected path; will not track. Actual cgroup path:[/sys/fs/cgroup/system.slice/walinuxagent.service] Expected:[/sys/fs/cgroup/azure.slice/walinuxagent.service] # 2025-03-12T22:03:04.095141Z INFO ExtHandler ExtHandler [CGW] The cpu,cpuacct controller is not mounted at the expected path for the walinuxagent.service cgroup; will not track. 
Actual cgroup path:[/sys/fs/cgroup/cpu,cpuacct/system.slice/walinuxagent.service] Expected:[/sys/fs/cgroup/cpu,cpuacct/azure.slice/walinuxagent.service] # { 'message': r"(The walinuxagent.service cgroup is not mounted at the expected path|controller is not mounted at the expected path for the walinuxagent.service cgroup); will not track. Actual cgroup path:\[.*\] Expected:\[.*\]", }, # Timing issue when the CGroup has been deleted/reset quota by the time we are fetching the values # from it. We would see IOError with file entry not found (ERRNO: 2). # 2025-08-28T18:46:06.813016Z WARNING MonitorHandler ExtHandler [PERIODIC] Could not collect metrics for cgroup azuremonitor-coreagent. Error : [CGroupsException] Failed to read cpu.stat: Cannot find throttled_usec { 'message': r"\[PERIODIC\] Could not collect metrics for cgroup .* Failed to read cpu.stat: Cannot find throttled_usec", }, ] def is_error(r: AgentLogRecord) -> bool: return r.level in ('ERROR', 'WARNING') or any(err in r.text for err in ['Exception', 'Traceback', '[CGW]']) errors = [] primary_interface_error = None provisioning_complete = False for record in self.read(): if is_error(record) and not self.matches_ignore_rule(record, ignore_rules): # Handle "/proc/net/route contains no routes" and "/proc/net/route is missing headers" as a special case # since it can take time for the primary interface to come up, and we don't want to report transient # errors as actual errors. 
The last of these errors in the log will be reported if "/proc/net/route contains no routes" in record.text or "/proc/net/route is missing headers" in record.text and record.prefix == "Daemon": primary_interface_error = record provisioning_complete = False else: errors.append(record) if "Provisioning complete" in record.text and record.prefix == "Daemon": provisioning_complete = True # Keep the "no routes found" as a genuine error message if it was never corrected if primary_interface_error is not None and not provisioning_complete: errors.append(primary_interface_error) return errors def agent_log_contains(self, data: str, after_timestamp: datetime = datetime_min_utc): """ This function looks for the specified test data string in the WALinuxAgent logs and returns if found or not. :param data: The string to look for in the agent logs :param after_timestamp: A timestamp appears after this timestamp :return: True if test data string found in the agent log after after_timestamp and False if not. """ for record in self.read(): if data in record.text and record.timestamp > after_timestamp: return True return False @staticmethod def _is_systemd(): # Taken from azurelinuxagent/common/osutil/systemd.py; repeated here because it is available only on agents >= 2.3 return os.path.exists("/run/systemd/system/") def _increment_counter(self, counter_name) -> int: """ Keeps a table of counters indexed by the given 'counter_name'. Each call to the function increments the value of that counter and returns the new value. 
""" count = self._counter_table.get(counter_name) count = 1 if count is None else count + 1 self._counter_table[counter_name] = count return count @staticmethod def matches_ignore_rule(record: AgentLogRecord, ignore_rules: List[Dict[str, Any]]) -> bool: """ Returns True if the given 'record' matches any of the 'ignore_rules' """ return any(re.search(rule['message'], record.message) is not None and ('if' not in rule or rule['if'](record)) for rule in ignore_rules) # The format of the log has changed over time and the current log may include records from different sources. Most records are single-line, but some of them # can span across multiple lines. We will assume records always start with a line similar to the examples below; any other lines will be assumed to be part # of the record that is being currently parsed. # # Newer Agent: 2019-11-27T22:22:48.123985Z VERBOSE ExtHandler ExtHandler Report vm agent status # 2021-03-30T19:45:33.793213Z INFO ExtHandler [Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent-2.14.64] Target handler state: enabled [incarnation 3] # # 2.2.46: the date time was changed to ISO-8601 format but the thread name was not added. # 2021-05-28T01:17:40.683072Z INFO ExtHandler Wire server endpoint:168.63.129.16 # 2021-05-28T01:17:40.683823Z WARNING ExtHandler Move rules file 70-persistent-net.rules to /var/lib/waagent/70-persistent-net.rules # 2021-05-28T01:17:40.767600Z INFO ExtHandler Successfully added Azure fabric firewall rules # # Older Agent: 2021/03/30 19:35:35.971742 INFO Daemon Azure Linux Agent Version:2.2.45 # # Oldest Agent: 2023/06/07 08:04:35.336313 WARNING Disabling guest agent in accordance with ovf-env.xml # # Extension: 2021/03/30 19:45:31 Azure Monitoring Agent for Linux started to handle. 
# 2021/03/30 19:45:31 [Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.7.0] cwd is /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.7.0 # _NEWER_AGENT_RECORD = re.compile(r'(?P[\d-]+T[\d:.]+Z)\s(?PVERBOSE|INFO|WARNING|ERROR)\s(?P\S+)\s(?P(Daemon)|(ExtHandler)|(LogCollector)|(\[\S+\]))\s(?P.*)') _2_2_46_AGENT_RECORD = re.compile(r'(?P[\d-]+T[\d:.]+Z)\s(?PVERBOSE|INFO|WARNING|ERROR)\s(?P)(?PDaemon|ExtHandler|\[\S+\])\s(?P.*)') _OLDER_AGENT_RECORD = re.compile(r'(?P[\d/]+\s[\d:.]+)\s(?PVERBOSE|INFO|WARNING|ERROR)\s(?P)(?PDaemon|ExtHandler)\s(?P.*)') _OLDEST_AGENT_RECORD = re.compile(r'(?P[\d/]+\s[\d:.]+)\s(?PVERBOSE|INFO|WARNING|ERROR)\s(?P)(?P)(?P.*)') _EXTENSION_RECORD = re.compile(r'(?P[\d/]+\s[\d:.]+)\s(?P)(?P)((?P\[[^\]]+\])\s)?(?P.*)') def read(self) -> Iterable[AgentLogRecord]: """ Generator function that returns each of the entries in the agent log parsed as AgentLogRecords. The function can be used following this pattern: for record in read_agent_log(): ... do something... 
""" def match_record(): for regex in [self._NEWER_AGENT_RECORD, self._2_2_46_AGENT_RECORD, self._OLDER_AGENT_RECORD, self._OLDEST_AGENT_RECORD]: m = regex.match(line) if m is not None: return m # The extension regex also matches the old agent records, so it needs to be last return self._EXTENSION_RECORD.match(line) def complete_record(): record.text = record.text.rstrip() # the text includes \n if extra_lines != "": record.text = record.text + "\n" + extra_lines.rstrip() record.message = record.message + "\n" + extra_lines.rstrip() return record log = self._open_log() try: record = None extra_lines = "" line = log.readline() while line != "": # while not EOF match = match_record() if match is not None: if record is not None: yield complete_record() record = AgentLogRecord.from_match(match) extra_lines = "" else: extra_lines = extra_lines + line line = log.readline() if record is not None: yield complete_record() finally: log.close() Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/agent_setup_helpers.py000066400000000000000000000026721510742556200256200ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Common helper functions for agent setup used by the tests # import time from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient def wait_for_agent_to_complete_provisioning(ssh_client: SshClient): """ Wait for the agent to complete provisioning """ log.info("Checking for the Agent to complete provisioning before starting the test validation") for _ in range(5): time.sleep(30) try: ssh_client.run_command("[ -f /var/lib/waagent/provisioned ] && exit 0 || exit 1", use_sudo=True) break except CommandError: log.info("Waiting for agent to complete provisioning, will check again after a short delay") else: raise Exception("Timeout while waiting for the Agent to complete provisioning") Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/agent_test.py000066400000000000000000000103521510742556200237070ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
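The `wait_for_agent_to_complete_provisioning` helper above relies on Python's `for`/`else` idiom: the `else` clause runs only when the loop exhausts all attempts without hitting `break`, which is exactly the timeout case. A minimal standalone sketch of that idiom (the function name and defaults here are illustrative, not part of the agent code):

```python
import time

def retry_until(condition, max_attempts=5, delay=30):
    """Poll 'condition' up to 'max_attempts' times, sleeping 'delay'
    seconds between attempts; raise if it never becomes true."""
    for _ in range(max_attempts):
        if condition():
            break
        time.sleep(delay)
    else:  # runs only if the loop completed without 'break'
        raise Exception("Timeout while waiting for the condition")
```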
# import sys from abc import ABC, abstractmethod from datetime import datetime from assertpy import fail from typing import Any, Dict, List from azurelinuxagent.common.future import datetime_min_utc from tests_e2e.tests.lib.agent_test_context import AgentTestContext, AgentVmTestContext, AgentVmssTestContext from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import FAIL_EXIT_CODE from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import ATTEMPTS, ATTEMPT_DELAY, SshClient class TestSkipped(Exception): """ Tests can raise this exception to indicate they should not be executed (for example, if trying to execute them on an unsupported distro """ class RemoteTestError(CommandError): """ Raised when a remote test fails with an unexpected error. """ class AgentTest(ABC): """ Abstract base class for Agent tests """ def __init__(self, context: AgentTestContext): self._context: AgentTestContext = context @abstractmethod def run(self): """ Test must define this method, which is used to execute the test. """ def get_ignore_error_rules(self) -> List[Dict[str, Any]]: """ Tests can override this method to return a list with rules to ignore errors in the agent log (see agent_log.py for sample rules). """ return [] def get_ignore_errors_before_timestamp(self) -> datetime: # Ignore errors in the agent log before this timestamp return datetime_min_utc @classmethod def run_from_command_line(cls): """ Convenience method to execute the test when it is being invoked directly from the command line (as opposed as being invoked from a test framework or library.) 
TODO: Need to implement for reading test specific arguments from command line """ try: if issubclass(cls, AgentVmTest): cls(AgentVmTestContext.from_args()).run() elif issubclass(cls, AgentVmssTest): cls(AgentVmssTestContext.from_args()).run() else: raise Exception(f"Class {cls.__name__} is not a valid test class") except SystemExit: # Bad arguments pass except AssertionError as e: log.error("%s", e) sys.exit(1) except: # pylint: disable=bare-except log.exception("Test failed") sys.exit(1) sys.exit(0) def _run_remote_test(self, ssh_client: SshClient, command: str, use_sudo: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> None: """ Derived classes can use this method to execute a remote test (a test that runs over SSH). """ try: output = ssh_client.run_command(command=command, use_sudo=use_sudo, attempts=attempts, attempt_delay=attempt_delay) log.info("*** PASSED: [%s]\n%s", command, self._indent(output)) except CommandError as error: if error.exit_code == FAIL_EXIT_CODE: fail(f"[{command}] {error.stderr}{self._indent(error.stdout)}") raise RemoteTestError(command=error.command, exit_code=error.exit_code, stdout=self._indent(error.stdout), stderr=error.stderr) @staticmethod def _indent(text: str, indent: str = " " * 8): return "\n".join(f"{indent}{line}" for line in text.splitlines()) class AgentVmTest(AgentTest): """ Base class for Agent tests that run on a single VM """ class AgentVmssTest(AgentTest): """ Base class for Agent tests that run on a scale set """ Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/agent_test_context.py000066400000000000000000000157621510742556200254650ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/agent_test_context.py ====

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import argparse
import os

from abc import ABC, abstractmethod
from pathlib import Path
from typing import List

from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient
from tests_e2e.tests.lib.virtual_machine_scale_set_client import VirtualMachineScaleSetClient
from tests_e2e.tests.lib.ssh_client import SshClient


class TestNode(object):
    """
    Name and IP address of a test VM
    """
    def __init__(self, name: str, ip_address: str):
        self.name = name
        self.ip_address = ip_address

    def __str__(self):
        return f"{self.name}:{self.ip_address}"


class AgentTestContext(ABC):
    """
    Base class for the execution context of agent tests; includes the working directories and SSH info for the tests.
    """
    DEFAULT_SSH_PORT = 22

    def __init__(self, working_directory: Path, username: str, identity_file: Path, ssh_port: int):
        self.working_directory: Path = working_directory
        self.username: str = username
        self.identity_file: Path = identity_file
        self.ssh_port: int = ssh_port

    @abstractmethod
    def get_test_nodes(self) -> List[TestNode]:
        """
        Returns the list of nodes the test is executed on
        """

    @abstractmethod
    def refresh_ip_addresses(self) -> None:
        """
        Updates the list of test nodes with their current IP addresses
        """

    @staticmethod
    def _create_argument_parser() -> argparse.ArgumentParser:
        """
        Creates an ArgumentParser that includes the arguments common to the concrete classes derived from AgentTestContext
        """
        parser = argparse.ArgumentParser()
        parser.add_argument('-c', '--cloud', dest="cloud", required=False, choices=['AzureCloud', 'AzureChinaCloud', 'AzureUSGovernment'], default="AzureCloud")
        parser.add_argument('-g', '--group', required=True)
        parser.add_argument('-l', '--location', required=True)
        parser.add_argument('-s', '--subscription', required=True)
        parser.add_argument('-w', '--working-directory', dest="working_directory", required=False, default=str(Path().home() / "tmp"))
        parser.add_argument('-u', '--username', required=False, default=os.getenv("USER"))
        parser.add_argument('-k', '--identity-file', dest="identity_file", required=False, default=str(Path.home() / ".ssh" / "id_rsa"))
        parser.add_argument('-p', '--ssh-port', dest="ssh_port", required=False, default=AgentTestContext.DEFAULT_SSH_PORT)
        return parser


class AgentVmTestContext(AgentTestContext):
    """
    Execution context for agent tests targeted to individual VMs.
    """
    def __init__(self, working_directory: Path, vm: VirtualMachineClient, ip_address: str, username: str, identity_file: Path, ssh_port: int = AgentTestContext.DEFAULT_SSH_PORT):
        super().__init__(working_directory, username, identity_file, ssh_port)
        self.vm: VirtualMachineClient = vm
        self.ip_address: str = ip_address

    def get_test_nodes(self) -> List[TestNode]:
        """
        Returns the list of nodes the test is executed on
        """
        return [TestNode(name=self.vm.name, ip_address=self.ip_address)]

    def refresh_ip_addresses(self) -> None:
        """
        Updates self.ip_address to reflect the current IP address of the VM.
        """
        self.ip_address = self.vm.get_ip_address()

    def create_ssh_client(self) -> SshClient:
        """
        Convenience method to create an SSH client using the connection info from the context.
        """
        return SshClient(
            ip_address=self.ip_address,
            username=self.username,
            identity_file=self.identity_file,
            port=self.ssh_port)

    @staticmethod
    def from_args():
        """
        Creates an AgentVmTestContext from the command line arguments.
        """
        parser = AgentTestContext._create_argument_parser()
        parser.add_argument('-vm', '--vm', required=True)
        parser.add_argument('-a', '--ip-address', dest="ip_address", required=False)  # Use the vm name as default

        args = parser.parse_args()

        working_directory: Path = Path(args.working_directory)
        if not working_directory.exists():
            working_directory.mkdir(exist_ok=True)

        vm: VirtualMachineClient = VirtualMachineClient(cloud=args.cloud, location=args.location, subscription=args.subscription, resource_group=args.group, name=args.vm)
        ip_address = args.ip_address if args.ip_address is not None else args.vm
        return AgentVmTestContext(working_directory=working_directory, vm=vm, ip_address=ip_address, username=args.username, identity_file=Path(args.identity_file), ssh_port=args.ssh_port)


class AgentVmssTestContext(AgentTestContext):
    """
    Execution context for agent tests targeted to VM Scale Sets.
    """
    def __init__(self, working_directory: Path, vmss: VirtualMachineScaleSetClient, username: str, identity_file: Path, ssh_port: int = AgentTestContext.DEFAULT_SSH_PORT):
        super().__init__(working_directory, username, identity_file, ssh_port)
        self.vmss: VirtualMachineScaleSetClient = vmss
        self._test_nodes = None  # fetched on demand

    def get_test_nodes(self) -> List[TestNode]:
        """
        Returns the list of nodes the test is executed on
        """
        if self._test_nodes is None:
            self.refresh_ip_addresses()
        return self._test_nodes

    def refresh_ip_addresses(self) -> None:
        """
        Updates the test nodes with the current IP addresses of the scale set instances.
        """
        self._test_nodes = [TestNode(name=i.instance_name, ip_address=i.ip_address) for i in self.vmss.get_instances_ip_address()]

    @staticmethod
    def from_args():
        """
        Creates an AgentVmssTestContext from the command line arguments.
        """
        parser = AgentTestContext._create_argument_parser()
        parser.add_argument('-vmss', '--vmss', required=True)

        args = parser.parse_args()

        working_directory: Path = Path(args.working_directory)
        if not working_directory.exists():
            working_directory.mkdir(exist_ok=True)

        vmss = VirtualMachineScaleSetClient(cloud=args.cloud, location=args.location, subscription=args.subscription, resource_group=args.group, name=args.vmss)
        return AgentVmssTestContext(working_directory=working_directory, vmss=vmss, username=args.username, identity_file=Path(args.identity_file), ssh_port=args.ssh_port)
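The shared-parser pattern used by the contexts above (a common `_create_argument_parser` plus per-context arguments such as `-vm` or `-vmss`) can be sketched with plain `argparse`. This is a condensed, self-contained illustration with only a few of the real options; the option names mirror the code above, but the parsed values here are placeholders.

```python
import argparse

# Sketch of the shared-parser pattern: a base parser defines the options common
# to all contexts, and each concrete context adds its own before parsing.
def create_base_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    parser.add_argument('-g', '--group', required=True)
    parser.add_argument('-l', '--location', required=True)
    parser.add_argument('-s', '--subscription', required=True)
    return parser

parser = create_base_parser()
parser.add_argument('-vm', '--vm', required=True)  # VM-specific argument, as in AgentVmTestContext.from_args()
args = parser.parse_args(['-g', 'my-rg', '-l', 'westus2', '-s', 'my-subscription', '-vm', 'test-vm'])
print(args.vm, args.location)
```

Note that argparse accepts multi-character "short" options such as `-vm` because they are registered explicitly as option strings.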
# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/agent_update_helpers.py ====

#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json

from assertpy import fail
import requests

from azure.mgmt.compute.models import VirtualMachine

from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import retry_if_false
from tests_e2e.tests.lib.ssh_client import SshClient
from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient


# Helper methods for agent update/publish tests

def verify_agent_update_flag_enabled(vm: VirtualMachineClient) -> bool:
    result: VirtualMachine = vm.get_model()
    flag: bool = result.os_profile.linux_configuration.enable_vm_agent_platform_updates
    if flag is None:
        return False
    return flag


def enable_agent_update_flag(vm: VirtualMachineClient) -> None:
    osprofile = {
        "location": vm.location,  # location is a required field
        "properties": {
            "osProfile": {
                "linuxConfiguration": {
                    "enableVMAgentPlatformUpdates": True
                }
            }
        }
    }
    log.info("updating the vm with osProfile property:\n%s", osprofile)
    vm.update(osprofile)


def request_rsm_update(requested_version: str, vm: VirtualMachineClient, arch_type: str, is_downgrade: bool, downgrade_from: str = "9.9.9.9") -> None:
    """
    Simulates the RSM request: first ensure PlatformUpdates is enabled on the VM, then make the request using the REST API.
    """
    if not verify_agent_update_flag_enabled(vm):
        # enable the flag
        log.info("Attempting vm update to set the enableVMAgentPlatformUpdates flag")
        enable_agent_update_flag(vm)
        log.info("Updated the enableVMAgentPlatformUpdates flag to True")
    else:
        log.info("The enableVMAgentPlatformUpdates flag is already set to True")

    if arch_type == "aarch64":
        data = {
            "target": "Microsoft.OSTCLinuxAgent.ARM64Test",
            "targetVersion": requested_version
        }
    else:
        data = {
            "target": "Microsoft.OSTCLinuxAgent.Test",
            "targetVersion": requested_version
        }

    if is_downgrade:
        data.update({"isEmergencyRollbackRequest": True})
        data.update({"badVersion": downgrade_from})

    log.info("Attempting rsm upgrade post request with data: {0}".format(data))
    # Later this api call will be replaced by an azure-python-sdk wrapper
    request = vm.create_resource_manager_request(requests.post, 'UpgradeVMAgent?api-version=2022-08-01')
    response = request(data=json.dumps(data), timeout=300)

    if response.status_code == 202:
        log.info("RSM upgrade request accepted")
    else:
        raise Exception("Error occurred while making RSM upgrade request. Status code : {0} and msg: {1}".format(response.status_code, response.content))


def verify_current_agent_version(ssh_client: SshClient, requested_version: str) -> None:
    """
    Verify that the agent is currently running the requested version
    """
    def _check_agent_version(version: str) -> bool:
        waagent_version: str = ssh_client.run_command("waagent-version", use_sudo=True)
        expected_version = f"Goal state agent: {version}"
        return expected_version in waagent_version

    log.info("Verifying agent updated to published version: {0}".format(requested_version))
    success: bool = retry_if_false(lambda: _check_agent_version(requested_version))
    waagent_version: str = ssh_client.run_command("waagent-version", use_sudo=True)
    if not success:
        fail("Guest agent didn't update to published version {0} but found \n {1}. \n ".format(requested_version, waagent_version))
    log.info(f"Successfully verified agent updated to published version. Current agent version running:\n {waagent_version}")


# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/azure_clouds.py ====

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Dict
from msrestazure.azure_cloud import Cloud, AZURE_PUBLIC_CLOUD, AZURE_CHINA_CLOUD, AZURE_US_GOV_CLOUD

AZURE_CLOUDS: Dict[str, Cloud] = {
    "AzureCloud": AZURE_PUBLIC_CLOUD,
    "AzureChinaCloud": AZURE_CHINA_CLOUD,
    "AzureUSGovernment": AZURE_US_GOV_CLOUD
}
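The request body that `request_rsm_update()` above assembles is pure data, so it can be factored out and exercised in isolation. This is a hedged sketch, not the test library's API: the `build_rsm_payload` helper is hypothetical, though the target names and default `badVersion` are taken verbatim from the code above.

```python
import json

# Hypothetical helper mirroring how request_rsm_update() builds the RSM request body.
def build_rsm_payload(requested_version: str, arch_type: str, is_downgrade: bool, downgrade_from: str = "9.9.9.9") -> str:
    # ARM64 VMs target a different publisher family than x86_64 VMs
    target = "Microsoft.OSTCLinuxAgent.ARM64Test" if arch_type == "aarch64" else "Microsoft.OSTCLinuxAgent.Test"
    data = {"target": target, "targetVersion": requested_version}
    if is_downgrade:
        # a downgrade is expressed as an emergency rollback away from a "bad" version
        data.update({"isEmergencyRollbackRequest": True, "badVersion": downgrade_from})
    return json.dumps(data)

print(build_rsm_payload("2.10.0.8", "x86_64", is_downgrade=False))
```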
# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/azure_sdk_client.py ====

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Any, Callable

from azure.identity import DefaultAzureCredential
from azure.core.polling import LROPoller

from tests_e2e.tests.lib.azure_clouds import AZURE_CLOUDS
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import execute_with_retry


class AzureSdkClient:
    """
    Base class for classes implementing clients of the Azure SDK.
    """
    _DEFAULT_TIMEOUT = 10 * 60  # (in seconds)

    @staticmethod
    def create_client(client_type: type, cloud: str, subscription_id: str):
        """
        Creates an SDK client of the given 'client_type'
        """
        azure_cloud = AZURE_CLOUDS[cloud]
        return client_type(
            base_url=azure_cloud.endpoints.resource_manager,
            credential=DefaultAzureCredential(authority=azure_cloud.endpoints.active_directory),
            credential_scopes=[azure_cloud.endpoints.resource_manager + "/.default"],
            subscription_id=subscription_id)

    @staticmethod
    def _execute_async_operation(operation: Callable[[], LROPoller], operation_name: str, timeout: int) -> Any:
        """
        Starts an async operation and waits for its completion. Returns the operation's result.
        """
        log.info("Starting [%s]", operation_name)
        poller: LROPoller = execute_with_retry(operation)
        log.info("Waiting for [%s]", operation_name)
        poller.wait(timeout=timeout)
        if not poller.done():
            raise TimeoutError(f"[{operation_name}] did not complete within {timeout} seconds")
        log.info("[%s] completed", operation_name)
        return poller.result()


# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/cgroup_helpers.py ====

import os
import re

from assertpy import assert_that, fail

from azurelinuxagent.common.future import datetime_min_utc
from azurelinuxagent.common.osutil import systemd
from azurelinuxagent.common.utils import shellutil, fileutil
from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION
from azurelinuxagent.ga.cgroupapi import create_cgroup_api, SystemdCgroupApiv1, SystemdCgroupApiv2
from azurelinuxagent.ga.cpucontroller import CpuControllerV1, CpuControllerV2
from tests_e2e.tests.lib.agent_log import AgentLog
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import retry_if_false

BASE_CGROUP = '/sys/fs/cgroup'
AGENT_CGROUP_NAME = 'WALinuxAgent'
AGENT_SERVICE_NAME = systemd.get_agent_unit_name()
CGROUP_TRACKED_PATTERN = r'Started tracking (cpu|memory) cgroup ([^\s]+)\s+\[(?P<path>[^\s]+)\]'

GATESTEXT_FULL_NAME = "Microsoft.Azure.Extensions.Edp.GATestExtGo"
GATESTEXT_SERVICE = "gatestext"
AZUREMONITOREXT_FULL_NAME = "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent"
AZUREMONITORAGENT_SERVICE = "azuremonitoragent"


def verify_if_distro_supports_cgroup():
    """
    Checks whether the agent is running on a distro that supports cgroups
    """
    log.info("===== Checking if distro supports cgroups")

    base_cgroup_fs_exists = os.path.exists(BASE_CGROUP)

    assert_that(base_cgroup_fs_exists).is_true().described_as("Cgroup file system:{0} not found in Distro {1}-{2}".format(BASE_CGROUP, DISTRO_NAME, DISTRO_VERSION))

    log.info('Distro %s-%s supports cgroups\n', DISTRO_NAME, DISTRO_VERSION)
def print_cgroups():
    """
    Logs the mounted cgroups information
    """
    log.info("====== Currently mounted cgroups ======")
    for m in shellutil.run_command(['mount']).splitlines():
        # The output is similar to
        #     mount
        #     sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
        #     proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
        #     devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1842988k,nr_inodes=460747,mode=755)
        #     cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
        #     cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
        #     cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
        #     cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
        #     cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
        if 'type cgroup' in m:
            log.info('\t%s', m)


def print_service_status():
    log.info("====== Agent Service status ======")
    output = shellutil.run_command(["systemctl", "status", systemd.get_agent_unit_name()])
    for line in output.splitlines():
        log.info("\t%s", line)


def get_agent_cgroup_mount_path():
    return [os.path.join('/', 'azure.slice', AGENT_SERVICE_NAME), os.path.join("/", "system.slice", AGENT_SERVICE_NAME)]


def get_extension_cgroup_mount_path(extension_name):
    return os.path.join('/', 'azure.slice/azure-vmextensions.slice', "azure-vmextensions-" + extension_name + ".slice")


def get_unit_cgroup_mount_path(unit_name):
    """
    Returns the cgroup mount path for the given unit
    """
    output = shellutil.run_command(["systemctl", "show", unit_name, "--property", "ControlGroup"])
    # The output is similar to
    #     systemctl show walinuxagent.service --property ControlGroup
    #     ControlGroup=/azure.slice/walinuxagent.service
    # The regex matches the output above and extracts the right-side value
    match = re.match("[^=]+=(?P<value>.+)", output)
    if match is not None:
        return match.group('value')
    return None


def verify_agent_cgroup_assigned_correctly():
    """
    Checks that the agent is running and is assigned to the correct cgroup, using the service status output
    """
    log.info("===== Verifying the daemon and the agent are assigned to the same correct cgroup using systemd")

    cgroup_mount_path = get_agent_cgroup_mount_path()

    def check_agent_service_cgroup():
        is_active = False
        is_cgroup_assigned = False
        service_status = shellutil.run_command(["systemctl", "status", systemd.get_agent_unit_name()])
        log.info("Agent service status output:\n%s", service_status)
        is_active_pattern = re.compile(r".*Active:\s+active.*")

        for line in service_status.splitlines():
            if re.match(is_active_pattern, line):
                is_active = True
            if any(cgroup in line for cgroup in cgroup_mount_path):
                is_cgroup_assigned = True

        return is_active and is_cgroup_assigned

    # The check can run before the correct cgroup has been assigned and reflected in the service status, so retry it a few times
    if not retry_if_false(check_agent_service_cgroup):
        service_status = shellutil.run_command(["systemctl", "status", systemd.get_agent_unit_name()])
        fail('walinuxagent service was not assigned to the expected cgroup:{0}. Current agent status:{1}'.format(cgroup_mount_path, service_status))

    log.info("Successfully verified the agent cgroup assigned correctly by systemd\n")


def get_agent_cpu_quota():
    """
    Returns the cpu quota for the agent service
    """
    output = shellutil.run_command(["systemctl", "show", AGENT_SERVICE_NAME, "--property", "CPUQuotaPerSecUSec"])
    # The output is similar to
    #     systemctl show walinuxagent --property CPUQuotaPerSecUSec
    #     CPUQuotaPerSecUSec=infinity
    match = re.match("[^=]+=(?P<value>.+)", output)
    if match is not None:
        return match.group('value')
    return None


def check_agent_quota_disabled():
    """
    Returns True if the cpu quota is infinity
    """
    cpu_quota = get_agent_cpu_quota()
    log.info(cpu_quota)
    # The quota can be expressed as seconds (s) or milliseconds (ms); no quota is expressed as "infinity".
    # Ubuntu 16 has an issue expressing no quota as "infinity" (https://github.com/systemd/systemd/issues/5965),
    # so we also check the quota value in the cpu controller directly.
    return cpu_quota == 'infinity' or get_unit_cgroup_cpu_quota_disabled(AGENT_SERVICE_NAME)


def check_cgroup_disabled_due_to_systemd_error():
    """
    Returns True if the cgroup is disabled due to a systemd error (Connection reset by peer). Example:

        2024-12-18T06:43:23.867711Z INFO ExtHandler ExtHandler [CGW] Disabling resource usage monitoring. Reason: Failed to start Microsoft.Azure.Extensions.Edp.GATestExtGo-1.2.0.0 using systemd-run, will try invoking the extension directly. Error: [SystemdRunError] Systemd process exited with code 1 and output [stdout]
        [stderr]
        Warning! D-Bus connection terminated.
        Failed to start transient scope unit: Connection reset by peer
    """
    return check_log_message("Failed to start.+using systemd-run, will try invoking the extension directly.+[SystemdRunError].+(Message recipient disconnected from message bus without replying|Connection reset by peer|Remote peer disconnected|Transport endpoint is not connected)")


def check_log_message(message, after_timestamp=datetime_min_utc):
    """
    Checks whether the log message is present in the agent log after the given timestamp (if provided)
    """
    log.info("Checking log message: {0}".format(message))
    for record in AgentLog().read():
        match = re.search(message, record.message, flags=re.DOTALL)
        if match is not None and record.timestamp > after_timestamp:
            log.info("Found message:\n\t%s", record.text.replace("\n", "\n\t"))
            return True
    return False


def get_unit_cgroup_proc_path(unit_name, controller):
    """
    Returns the cgroup.procs path for the given unit and controller.
    """
    cgroups_api = create_cgroup_api()
    unit_cgroup = cgroups_api.get_unit_cgroup(unit_name=unit_name, cgroup_name="test cgroup")
    if isinstance(cgroups_api, SystemdCgroupApiv1):
        return unit_cgroup.get_controller_procs_path(controller=controller)
    else:
        return unit_cgroup.get_procs_path()


def get_unit_cgroup_cpu_quota_disabled(unit_name):
    """
    Returns True if no cpu quota is set for the given unit cgroup
    """
    cgroups_api = create_cgroup_api()
    unit_cgroup = cgroups_api.get_unit_cgroup(unit_name=unit_name, cgroup_name="test cgroup")
    controllers = unit_cgroup.get_controllers()
    for controller in controllers:
        if isinstance(controller, CpuControllerV1):
            path = os.path.join(controller.path, "cpu.cfs_quota_us")
            log.info("Checking cpu.cfs_quota_us file: {0}".format(path))
            val = fileutil.read_file(path).strip()
            return val == "-1"  # -1 means no quota
        elif isinstance(controller, CpuControllerV2):
            # /sys/fs/cgroup/system.slice/cron.service$ cat cpu.max
            # max 100000
            path = os.path.join(controller.path, "cpu.max")
            log.info("Checking cpu.max file: {0}".format(path))
            val = fileutil.read_file(path).split()[0]
            return val == "max"  # max means no quota
    return False


def get_mounted_controller_list():
    """
    Returns the list of controller names which are mounted at different cgroup paths
    """
    if using_cgroupv2():
        return []  # empty, since v2 controllers are all mounted at the same root
    return ['cpu', 'memory']


def using_cgroupv2():
    """
    Returns True if cgroup v2 is in use
    """
    cgroups_api = create_cgroup_api()
    return isinstance(cgroups_api, SystemdCgroupApiv2)


# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/firewall_manager.py ====

import errno
import json
import re
import time

from assertpy import fail
from typing import Callable, Dict, List

from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.utils.shellutil import CommandError
from tests_e2e.tests.lib.logging import log


def get_wireserver_ip() -> str:
    try:
        with open('/var/lib/waagent/WireServerEndpoint', 'r') as f:
            wireserver_ip = f.read()
    except:  # pylint: disable=bare-except
        wireserver_ip = '168.63.129.16'
    return wireserver_ip


class FirewallConfigurationError(Exception):
    """
    Exception raised when the firewall is not configured correctly
    """


class FirewallManager:
    """
    Utilities to manage the firewall
    """
    FIREWALL_PERIOD = 30

    ACCEPT_DNS = "ACCEPT DNS"
    ACCEPT = "ACCEPT"
    DROP = "DROP"

    def __init__(self):
        self._wire_server_address = get_wireserver_ip()

    @staticmethod
    def create():
        """
        Creates the appropriate firewall manager for the current distro
        """
        try:
            shellutil.run_command(["sudo", "iptables", "--version"])  # On some distros, e.g. CentOS, iptables is not on the PATH for regular users
            log.info("Using iptables to manage the firewall")
            return IpTables()
        except CommandError:
            pass

        try:
            shellutil.run_command(["nft", "--version"])
            log.info("Using nftables to manage the firewall")
            return NfTables()
        except FileNotFoundError:
            pass

        raise Exception("No firewall commands are installed")

    def get_state(self) -> str:
        raise NotImplementedError()

    def get_missing_rules(self) -> List[str]:
        raise NotImplementedError()

    def check_rule(self, rule_name: str) -> bool:
        raise NotImplementedError()

    def delete_rule(self, rule_name: str) -> None:
        raise NotImplementedError()

    def log_firewall_state(self, header: str) -> None:
        try:
            log.info(f"{header}:\n{self.get_state()}")
        except Exception as error:
            log.warning(f"Error -- Failed to get the current state of the firewall: {error}")

    def assert_all_rules_are_set(self) -> None:
        """
        Fails the test if any of the firewall rules are missing
        """
        log.info("Verifying all firewall rules are set...")
        missing = self.get_missing_rules()
        if len(missing) > 0:
            # The agent re-adds missing rules within OS.EnableFirewallPeriod, so wait that period plus some buffer
            log.info("Some firewall rules are missing. Waiting for a short period to give the agent a chance to re-add the rules...")
            time.sleep(self.FIREWALL_PERIOD + 15)
            missing = self.get_missing_rules()
        if len(missing) > 0:
            fail(f"Some firewall rules are missing: {missing}. Current firewall rules:\n{self.get_state()}")
        log.info("All firewall rules are set")

    def verify_rule_is_not_set(self, rule_name: str) -> None:
        """
        Verifies that the given rule is not set (for example, if it was just deleted).
        """
        log.info(f"-----Verifying firewall rule {rule_name} is not set")
        if self.check_rule(rule_name):
            raise Exception(f"Firewall rule {rule_name} should not be set. Current firewall state:\n{self.get_state()}")
        log.info(f"Firewall rule {rule_name} is not set")

    @staticmethod
    def _log_and_run_command(command: str) -> str:
        log.info(f"Executing command: {command}")
        return shellutil.run_command(command.split(" "))


class _IpTablesFirewalldManager(FirewallManager):
    """
    Base class for the IpTables and Firewalld classes
    """
    def __init__(self):
        super().__init__()
        self._commands: Dict[str, Callable] = {
            self.ACCEPT_DNS: self._get_accept_dns_command,
            self.ACCEPT: self._get_accept_command,
            self.DROP: self._get_accept_drop_command
        }

    def get_state(self) -> str:
        return self._log_and_run_command(self._get_state_command())

    def get_missing_rules(self) -> List[str]:
        missing = []
        for name, get_command in self._commands.items():
            try:
                command_option = self._get_check_command_option()
                self._log_and_run_command(get_command(command_option))
            except CommandError as command_error:
                if command_error.returncode != 1:
                    raise
                missing.append(name)
        return missing

    def check_rule(self, rule_name: str) -> bool:
        try:
            command_option = self._get_check_command_option()
            self._log_and_run_command(self._commands[rule_name](command_option))
        except CommandError as command_error:
            if command_error.returncode == 1:
                return False
            raise
        return True

    def delete_rule(self, rule_name: str) -> None:
        command_option = self._get_delete_command_option()
        self._log_and_run_command(self._commands[rule_name](command_option))

    def _get_state_command(self) -> str:
        raise NotImplementedError()

    def _get_check_command_option(self) -> str:
        raise NotImplementedError()

    def _get_delete_command_option(self) -> str:
        raise NotImplementedError()

    def _get_accept_dns_command(self, command_option: str) -> str:
        raise NotImplementedError()

    def _get_accept_command(self, command_option: str) -> str:
        raise NotImplementedError()

    def _get_accept_drop_command(self, command_option: str) -> str:
        raise NotImplementedError()


class IpTables(_IpTablesFirewalldManager):
    """
    Implementation of FirewallManager using the iptables command
    """
    def _get_state_command(self) -> str:
        return "sudo iptables -w -t security -L -nxv"

    def _get_check_command_option(self) -> str:
        return "-C"

    def _get_delete_command_option(self) -> str:
        return "-D"

    def _get_accept_dns_command(self, command_option: str) -> str:
        return f"sudo iptables -w -t security {command_option} OUTPUT -d {self._wire_server_address} -p tcp --destination-port 53 -j ACCEPT"

    def _get_accept_command(self, command_option: str) -> str:
        return f"sudo iptables -w -t security {command_option} OUTPUT -d {self._wire_server_address} -p tcp -m owner --uid-owner 0 -j ACCEPT"

    def _get_accept_drop_command(self, command_option: str) -> str:
        return f"sudo iptables -w -t security {command_option} OUTPUT -d {self._wire_server_address} -p tcp -m conntrack --ctstate INVALID,NEW -j DROP"


class Firewalld(_IpTablesFirewalldManager):
    """
    Implementation of FirewallManager using the firewall-cmd command
    """
    @staticmethod
    def is_service_running() -> bool:
        """
        Returns True if the firewalld service is running on the VM
        """
        try:
            return shellutil.run_command(["firewall-cmd", "--state"]).rstrip() == "running"
        except Exception as exception:
            if isinstance(exception, OSError) and exception.errno == errno.ENOENT:  # pylint: disable=no-member
                return False
            log.info(f"The firewalld service is present, but it is not running: {exception}")
            return False

    def _get_state_command(self) -> str:
        return "sudo firewall-cmd --permanent --direct --get-all-passthroughs"

    def _get_check_command_option(self) -> str:
        return "--query-passthrough"

    def _get_delete_command_option(self) -> str:
        return "--remove-passthrough"

    def _get_accept_dns_command(self, command_option: str) -> str:
        return f"firewall-cmd --permanent --direct {command_option} ipv4 -t security -A OUTPUT -d {self._wire_server_address} -p tcp --destination-port 53 -j ACCEPT"

    def _get_accept_command(self, command_option: str) -> str:
        return f"firewall-cmd --permanent --direct {command_option} ipv4 -t security -A OUTPUT -d {self._wire_server_address} -p tcp -m owner --uid-owner 0 -j ACCEPT"

    def _get_accept_drop_command(self, command_option: str) -> str:
        return f"firewall-cmd --permanent --direct {command_option} ipv4 -t security -A OUTPUT -d {self._wire_server_address} -p tcp -m conntrack --ctstate INVALID,NEW -j DROP"


class NfTables(FirewallManager):
    """
    Implementation of FirewallManager using the nft command
    """
    def get_state(self) -> str:
        #
        # The state is similar to
        #
        #     table ip walinuxagent {
        #         chain output {
        #             type filter hook output priority filter; policy accept;
        #             ip daddr 168.63.129.16 tcp dport 53 counter packets 0 bytes 0 accept
        #             ip daddr 168.63.129.16 meta skuid 0 counter packets 93 bytes 57077 accept
        #             ip daddr 168.63.129.16 ct state invalid,new counter packets 5904 bytes 742896 drop
        #         }
        #     }
        #
        return shellutil.run_command(["sudo", "nft", "list", "table", "walinuxagent"])

    _rule_regexp = {
        FirewallManager.ACCEPT_DNS: r" tcp dport != 53 ",
        FirewallManager.ACCEPT: r" meta skuid != 0 ",
        FirewallManager.DROP: r" drop$"
    }

    def get_missing_rules(self) -> List[str]:
        if "table ip walinuxagent" not in shellutil.run_command(["sudo", "nft", "list", "tables"]):
            return [FirewallManager.ACCEPT_DNS, FirewallManager.ACCEPT, FirewallManager.DROP]
        try:
            missing = []
            wireserver_rule = self._get_wireserver_rule()
            for rule, regexp in NfTables._rule_regexp.items():
                if re.search(regexp, wireserver_rule) is None:
                    missing.append(rule)
            return missing
        except FirewallConfigurationError:
            return [FirewallManager.ACCEPT_DNS, FirewallManager.ACCEPT, FirewallManager.DROP]

    def check_rule(self, rule_name: str) -> bool:
        try:
            wireserver_rule = self._get_wireserver_rule()
            return re.search(self._rule_regexp[rule_name], wireserver_rule) is not None
        except KeyError:
            raise Exception(f"Invalid rule name: {rule_name}")

    def _get_wireserver_rule(self) -> str:
        """
        Returns the output line of the nft command that contains the rule for the WireServer address; raises FirewallConfigurationError if the rule is not found.
        """
        for line in self.get_state().split("\n"):
            if re.search(r"\s*ip daddr 168.63.129.16\s*", line) is not None:
                return line
        raise FirewallConfigurationError("Could not find any rules for the WireServer address in the nftables state")

    def delete_rule(self, rule_name: str) -> None:
        output: str = shellutil.run_command(["sudo", "nft", "--json", "--handle", "list", "table", "walinuxagent"])
        #
        # The output will be similar to
        #
        #     {
        #         "nftables": [
        #             { "metainfo": { "version": "1.0.2", "release_name": "Lester Gooch", "json_schema_version": 1 } },
        #             { "table": { "family": "ip", "name": "walinuxagent", "handle": 2 } },
        #             { "chain": { "family": "ip", "table": "walinuxagent", "name": "output", "handle": 1, "type": "filter", "hook": "output", "prio": 0, "policy": "accept" } },
        #             {
        #                 "rule": {
        #                     "family": "ip", "table": "walinuxagent", "chain": "output", "handle": 2,
        #                     "expr": [
        #                         ...
        #                     ]
        #                 }
        #             }
        #         ]
        #     }
        #
        # Delete the entire rule and add a new one that is missing the desired rule_name
        #
        state = json.loads(output)
        handles = [i["rule"]["handle"] for i in state["nftables"] if i.get("rule") is not None and i["rule"]["table"] == "walinuxagent"]
        if len(handles) != 1:
            raise Exception(f"Expected exactly one rule in the walinuxagent table.\n{output}")
        self._log_and_run_command(f"sudo nft delete rule ip walinuxagent output handle {handles[0]}")

        if rule_name == FirewallManager.ACCEPT_DNS:
            add_rule_command = "sudo nft add rule ip walinuxagent output ip protocol tcp ip daddr 168.63.129.16 skuid != 0 ct state invalid,new counter drop"
        elif rule_name == FirewallManager.ACCEPT:
            add_rule_command = "sudo nft add rule ip walinuxagent output ip protocol tcp ip daddr 168.63.129.16 tcp dport != 53 ct state invalid,new counter drop"
        elif rule_name == FirewallManager.DROP:
            add_rule_command = "sudo nft add rule ip walinuxagent output ip protocol tcp ip daddr 168.63.129.16 tcp dport != 53 skuid != 0 ct state invalid,new counter accept"
        else:
            raise Exception(f"Invalid rule name: {rule_name}")
        self._log_and_run_command(add_rule_command)


# ==== Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/logging.py ====

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This module defines a single object, 'log', of type AgentLogger, which the end-to-end tests and libraries use
# for logging.
#
import contextlib
import sys

from logging import FileHandler, Formatter, Handler, Logger, StreamHandler, INFO
from pathlib import Path
from threading import current_thread
from typing import Dict, Callable


class _AgentLoggingHandler(Handler):
    """
    _AgentLoggingHandler is a helper class for AgentLogger. This handler simply redirects logging to other handlers.

    It maintains a set of FileHandlers associated with specific threads. When a thread emits a log record, the
    _AgentLoggingHandler passes through the call to the FileHandler associated with that thread, or to a StreamHandler
    that outputs to stdout if there is no FileHandler for that thread.

    Threads can set a FileHandler for themselves using _AgentLoggingHandler.set_current_thread_log() and remove that
    handler using _AgentLoggingHandler.close_current_thread_log().

    The _AgentLoggingHandler simply passes through calls to setLevel, setFormatter, flush, and close to the handlers
    it maintains.

    _AgentLoggingHandler is meant to be used primarily in multithreaded scenarios and is thread-safe.
""" def __init__(self): super().__init__() self.formatter: Formatter = Formatter('%(asctime)s.%(msecs)03d [%(levelname)s] %(message)s', datefmt="%Y-%m-%dT%H:%M:%SZ") self.default_handler = StreamHandler(sys.stdout) self.default_handler.setFormatter(self.formatter) self.per_thread_handlers: Dict[int, FileHandler] = {} def set_thread_log(self, thread_ident: int, log_file: Path) -> None: self.close_current_thread_log() handler: FileHandler = FileHandler(str(log_file)) handler.setFormatter(self.formatter) self.per_thread_handlers[thread_ident] = handler def get_thread_log(self, thread_ident: int) -> Path: handler = self.per_thread_handlers.get(thread_ident) if handler is None: return None return Path(handler.baseFilename) def close_thread_log(self, thread_ident: int) -> None: handler = self.per_thread_handlers.pop(thread_ident, None) if handler is not None: handler.close() def set_current_thread_log(self, log_file: Path) -> None: self.set_thread_log(current_thread().ident, log_file) def get_current_thread_log(self) -> Path: return self.get_thread_log(current_thread().ident) def close_current_thread_log(self) -> None: self.close_thread_log(current_thread().ident) def emit(self, record) -> None: handler = self.per_thread_handlers.get(current_thread().ident) if handler is None: handler = self.default_handler handler.emit(record) def setLevel(self, level) -> None: self._for_each_handler(lambda h: h.setLevel(level)) def setFormatter(self, fmt) -> None: self._for_each_handler(lambda h: h.setFormatter(fmt)) def flush(self) -> None: self._for_each_handler(lambda h: h.flush()) def close(self) -> None: self._for_each_handler(lambda h: h.close()) def _for_each_handler(self, op: Callable[[Handler], None]) -> None: op(self.default_handler) # copy of the values into a new list in case the dictionary changes while we are iterating for handler in list(self.per_thread_handlers.values()): op(handler) class AgentLogger(Logger): """ AgentLogger is a Logger customized for agent test 
scenarios. When tests are executed from the command line (for example, during development) the AgentLogger can be used with its default configuration, which simply outputs to stdout. When tests are executed from the test framework, typically there are multiple test suites executed concurrently on different threads, and each test suite must have its own log file; in that case, each thread can call AgentLogger.set_current_thread_log() to send all the logging from that thread to a particular file. """ def __init__(self): super().__init__(name="waagent", level=INFO) self._handler: _AgentLoggingHandler = _AgentLoggingHandler() self.addHandler(self._handler) def set_thread_log(self, thread_ident: int, log_file: Path) -> None: self._handler.set_thread_log(thread_ident, log_file) def get_thread_log_file(self, thread_ident: int) -> Path: """ Returns the Path of the log file for the current thread, or None if a log has not been set """ return self._handler.get_thread_log(thread_ident) def close_thread_log(self, thread_ident: int) -> None: self._handler.close_thread_log(thread_ident) def set_current_thread_log(self, log_file: Path) -> None: self._handler.set_current_thread_log(log_file) def get_current_thread_log(self) -> Path: return self._handler.get_current_thread_log() def close_current_thread_log(self) -> None: self._handler.close_current_thread_log() log: AgentLogger = AgentLogger() @contextlib.contextmanager def set_current_thread_log(log_file: Path): """ Context Manager to set the log file for the current thread temporarily """ initial_value = log.get_current_thread_log() log.set_current_thread_log(log_file) try: yield finally: log.close_current_thread_log() if initial_value is not None: log.set_current_thread_log(initial_value) @contextlib.contextmanager def set_thread_name(name: str): """ Context Manager to change the name of the current thread temporarily """ initial_name = current_thread().name current_thread().name = name try: yield finally: current_thread().name 
= initial_name Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/network_security_rule.py000066400000000000000000000203521510742556200262220ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import json from typing import Any, Dict, List from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate class NetworkSecurityRule: """ Provides methods to add network security rules to the given ARM template. The security rules are added under _NETWORK_SECURITY_GROUP, which is also added to the template. 
""" def __init__(self, template: Dict[str, Any], is_lisa_template: bool): self._template = template self._is_lisa_template = is_lisa_template _NETWORK_SECURITY_GROUP: str = "waagent-nsg" def add_allow_ssh_rule(self, ip_address: str) -> None: self.add_security_rule( json.loads(f"""{{ "name": "waagent-ssh", "properties": {{ "description": "Allows inbound SSH connections from the orchestrator machine.", "protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "22", "sourceAddressPrefix": "{ip_address}", "destinationAddressPrefix": "*", "access": "Allow", "priority": 100, "direction": "Inbound" }} }}""")) def add_security_rule(self, security_rule: Dict[str, Any]) -> None: self._get_network_security_group()["properties"]["securityRules"].append(security_rule) def disable_default_outbound_access(self) -> None: subnets = self._get_subnets() subnet = subnets[0] subnet["properties"]["defaultoutboundaccess"] = False def _get_network_security_group(self) -> Dict[str, Any]: resources: Any = self._template["resources"] # # If the NSG already exists, just return it # try: return UpdateArmTemplate.get_resource_by_name(resources, self._NETWORK_SECURITY_GROUP, "Microsoft.Network/networkSecurityGroups") except KeyError: pass # # Otherwise, create it and append it to the list of resources # network_security_group = json.loads(f"""{{ "type": "Microsoft.Network/networkSecurityGroups", "name": "{self._NETWORK_SECURITY_GROUP}", "location": "[resourceGroup().location]", "apiVersion": "2020-05-01", "properties": {{ "securityRules": [] }} }}""") # resources is a dictionary in LISA's ARM template, but a list in the template for scale sets if isinstance(resources, dict): nsg_reference = "network_security_groups" resources[nsg_reference] = network_security_group else: nsg_reference = f"[resourceId('Microsoft.Network/networkSecurityGroups', '{self._NETWORK_SECURITY_GROUP}')]" resources.append(network_security_group) # # Add a dependency on the NSG to the virtual network # 
network_resource = UpdateArmTemplate.get_resource(resources, "Microsoft.Network/virtualNetworks") network_resource_dependencies = network_resource.get("dependsOn") if network_resource_dependencies is None: network_resource["dependsOn"] = [nsg_reference] else: network_resource_dependencies.append(nsg_reference) # # Add a reference to the NSG to the properties of the subnets. # nsg_reference = json.loads(f"""{{ "networkSecurityGroup": {{ "id": "[resourceId('Microsoft.Network/networkSecurityGroups', '{self._NETWORK_SECURITY_GROUP}')]" }} }}""") subnets = self._get_subnets() subnets_properties = subnets[0].get("properties") if subnets_properties is None: subnets[0]["properties"] = nsg_reference else: subnets_properties.update(nsg_reference) return network_security_group def _get_subnets(self) -> List[Dict[str, Any]]: resources: Any = self._template["resources"] network_resource = UpdateArmTemplate.get_resource(resources, "Microsoft.Network/virtualNetworks") if self._is_lisa_template: # The subnets are a copy property of the virtual network in LISA's ARM template: # # { # "condition": "[empty(parameters('virtual_network_resource_group'))]", # "apiVersion": "2020-05-01", # "type": "Microsoft.Network/virtualNetworks", # "name": "[parameters('virtual_network_name')]", # "location": "[parameters('location')]", # "properties": { # "addressSpace": { # "addressPrefixes": [ # "10.0.0.0/16" # ] # }, # "copy": [ # { # "name": "subnets", # "count": "[parameters('subnet_count')]", # "input": { # "name": "[concat(parameters('subnet_prefix'), copyIndex('subnets'))]", # "properties": { # "addressPrefix": "[concat('10.0.', copyIndex('subnets'), '.0/24')]" # } # } # } # ] # } # } # subnets_copy = network_resource["properties"].get("copy") if network_resource.get( "properties") is not None else None if subnets_copy is None: raise Exception("Cannot find the copy property of the virtual network in the ARM template") subnets_with_input = [i for i in subnets_copy if "name" in i and i["name"] ==
'subnets'] if len(subnets_with_input) == 0: raise Exception("Cannot find the subnets of the virtual network in the ARM template") subnets = [subnet.get("input") for subnet in subnets_with_input if subnet.get("input") is not None] if len(subnets) == 0: raise Exception("Cannot find the input property of the subnets in the ARM template") return subnets else: # # The subnets are simple property of the virtual network in template for scale sets: # { # "apiVersion": "2023-06-01", # "type": "Microsoft.Network/virtualNetworks", # "name": "[variables('virtualNetworkName')]", # "location": "[resourceGroup().location]", # "properties": { # "addressSpace": { # "addressPrefixes": [ # "[variables('vnetAddressPrefix')]" # ] # }, # "subnets": [ # { # "name": "[variables('subnetName')]", # "properties": { # "addressPrefix": "[variables('subnetPrefix')]", # } # } # ] # } # } subnets = network_resource["properties"].get("subnets") if network_resource.get( "properties") is not None else None if subnets is None: raise Exception("Cannot find the subnets property of the virtual network in the ARM template") return subnets Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/remote_test.py000066400000000000000000000026161510742556200241100ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
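The LISA branch of `_get_subnets` above resolves the subnet definitions from the virtual network's `copy` property. A minimal, self-contained sketch of that lookup, using a hypothetical trimmed-down template (not the real LISA one) and a standalone `get_lisa_subnets` helper:

```python
# Sketch of the LISA-branch lookup in _get_subnets: find the virtual network
# resource, then pull the subnet definitions out of its "copy" property.
# The template below is a hypothetical, trimmed-down example.
lisa_template = {
    "resources": [
        {
            "type": "Microsoft.Network/virtualNetworks",
            "name": "vnet",
            "properties": {
                "copy": [
                    {
                        "name": "subnets",
                        "count": 2,
                        "input": {
                            "name": "[concat('subnet', copyIndex('subnets'))]",
                            "properties": {
                                "addressPrefix": "[concat('10.0.', copyIndex('subnets'), '.0/24')]"
                            }
                        }
                    }
                ]
            }
        }
    ]
}


def get_lisa_subnets(template):
    # Locate the virtual network resource (resources is a list here)
    network = next(r for r in template["resources"] if r["type"] == "Microsoft.Network/virtualNetworks")
    subnets_copy = (network.get("properties") or {}).get("copy")
    if subnets_copy is None:
        raise Exception("Cannot find the copy property of the virtual network in the ARM template")
    # Keep only the copy entries named 'subnets', then return their 'input' definitions
    return [i["input"] for i in subnets_copy if i.get("name") == "subnets" and i.get("input") is not None]


subnets = get_lisa_subnets(lisa_template)
```

Note that the result has one entry per `copy` item named "subnets" (the template engine expands `count` at deployment time, not here).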
# import sys from typing import Callable from tests_e2e.tests.lib.logging import log SUCCESS_EXIT_CODE = 0 FAIL_EXIT_CODE = 100 ERROR_EXIT_CODE = 200 def run_remote_test(test_method: Callable[[], None]) -> None: """ Helper function to run a remote test; implements coding conventions for remote tests, e.g. error message goes to stderr, test log goes to stdout, etc. """ try: test_method() log.info("*** PASSED") except AssertionError as e: print(f"{e}", file=sys.stderr) log.error("%s", e) sys.exit(FAIL_EXIT_CODE) except Exception as e: print(f"UNEXPECTED ERROR: {e}", file=sys.stderr) log.exception("*** UNEXPECTED ERROR") sys.exit(ERROR_EXIT_CODE) sys.exit(SUCCESS_EXIT_CODE) Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/resource_group_client.py000066400000000000000000000062211510742556200261530ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
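`run_remote_test` in remote_test.py above encodes test outcomes as distinct exit codes so an orchestrator can tell a test failure from a harness error. A small sketch of that convention, with a hypothetical `classify_outcome` helper that returns the code instead of calling `sys.exit`:

```python
# Sketch of the exit-code convention used by run_remote_test: an AssertionError
# maps to FAIL_EXIT_CODE (100), any other exception to ERROR_EXIT_CODE (200),
# and success to 0. classify_outcome is a hypothetical helper, not part of the
# original module.
SUCCESS_EXIT_CODE = 0
FAIL_EXIT_CODE = 100
ERROR_EXIT_CODE = 200


def classify_outcome(test_method):
    try:
        test_method()
        return SUCCESS_EXIT_CODE
    except AssertionError:
        return FAIL_EXIT_CODE
    except Exception:
        return ERROR_EXIT_CODE


def passing_test():
    assert True


def failing_test():
    assert False, "expected failure"


def broken_test():
    raise RuntimeError("harness error")
```

Keeping the three codes disjoint lets the caller branch on the process exit status alone, without parsing the test's output.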
# # # This module includes facilities to create a resource group and deploy an arm template to it # from typing import Dict, Any from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.resource import ResourceManagementClient from azure.mgmt.resource.resources.models import DeploymentProperties, DeploymentMode from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log class ResourceGroupClient(AzureSdkClient): """ Provides operations on resource groups (create, template deployment, etc). """ def __init__(self, cloud: str, subscription: str, name: str, location: str = ""): super().__init__() self.cloud: str = cloud self.location = location self.subscription: str = subscription self.name: str = name self._compute_client = AzureSdkClient.create_client(ComputeManagementClient, cloud, subscription) self._resource_client = AzureSdkClient.create_client(ResourceManagementClient, cloud, subscription) def create(self) -> None: """ Creates a resource group """ log.info("Creating resource group %s", self) self._resource_client.resource_groups.create_or_update(self.name, {"location": self.location}) def deploy_template(self, template: Dict[str, Any], parameters: Dict[str, Any] = None): """ Deploys an ARM template to the resource group """ if parameters: properties = DeploymentProperties(template=template, parameters=parameters, mode=DeploymentMode.incremental) else: properties = DeploymentProperties(template=template, mode=DeploymentMode.incremental) log.info("Deploying template to resource group %s...", self) self._execute_async_operation( operation=lambda: self._resource_client.deployments.begin_create_or_update(self.name, 'TestDeployment', {'properties': properties}), operation_name=f"Deploy template to resource group {self}", timeout=AzureSdkClient._DEFAULT_TIMEOUT) def delete(self) -> None: """ Deletes the resource group """ log.info("Deleting resource group %s (no wait)", self) 
self._resource_client.resource_groups.begin_delete(self.name) # Do not wait for the deletion to complete def is_exists(self) -> bool: """ Checks if the resource group exists """ return self._resource_client.resource_groups.check_existence(self.name) def __str__(self): return f"{self.name}" Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/retry.py000066400000000000000000000074061510742556200227250ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import time from typing import Callable, Any from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError # R1710: Either all return statements in a function should return an expression, or none of them should. (inconsistent-return-statements) def execute_with_retry(operation: Callable[[], Any]) -> Any: # pylint: disable=inconsistent-return-statements """ Some Azure errors (e.g. throttling) are retryable; this method attempts the given operation retrying a few times (after a short delay) if the error includes the string "RetryableError" """ attempts = 3 while attempts > 0: attempts -= 1 try: return operation() except Exception as e: # TODO: Do we need to retry on msrestazure.azure_exceptions.CloudError? if "RetryableError" not in str(e) or attempts == 0: raise log.warning("The operation failed with a RetryableError, retrying in 30 secs. 
Error: %s", e) time.sleep(30) def retry_ssh_run(operation: Callable[[], Any], attempts: int, attempt_delay: int) -> Any: """ Attempts to run an SSH command, retrying a few times if the operation fails with a connection timeout or a similar transient error """ i = 0 while True: i += 1 try: return operation() except CommandError as e: retryable = ((e.exit_code == 255 and ("Connection timed out" in e.stderr or "Connection refused" in e.stderr)) or "Unprivileged users are not permitted to log in yet" in e.stderr) if not retryable or i >= attempts: raise log.warning("The SSH operation failed, retrying in %s secs [Attempt %s/%s].\n%s", attempt_delay, i, attempts, e) time.sleep(attempt_delay) def retry_if_false(operation: Callable[[], bool], attempts: int = 5, delay: int = 30) -> bool: """ Attempts the given operation, retrying a few times (after a short delay) if it fails. Note: use for operations that return True or False """ success: bool = False while attempts > 0 and not success: attempts -= 1 try: success = operation() except Exception as e: log.warning("Error in operation: %s", e) if attempts == 0: raise if not success and attempts != 0: log.info("Current operation failed, retrying in %s secs.", delay) time.sleep(delay) return success # R1710: Either all return statements in a function should return an expression, or none of them should. (inconsistent-return-statements) def retry(operation: Callable[[], Any], attempts: int = 5, delay: int = 30) -> Any: # pylint: disable=inconsistent-return-statements """ This method attempts the given operation retrying a few times on exceptions. Returns the value returned by the operation.
""" while attempts > 0: attempts -= 1 try: return operation() except Exception as e: if attempts == 0: raise log.warning("Error in operation, retrying in %s secs: %s", delay, e) time.sleep(delay) Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/shell.py000066400000000000000000000045171510742556200226670ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import threading from subprocess import Popen, PIPE from typing import Any class CommandError(Exception): """ Exception raised by run_command when the command returns an error """ def __init__(self, command: Any, exit_code: int, stdout: str, stderr: str): super().__init__(f"'{command}' failed (exit code: {exit_code}): {stderr}") self.command: Any = command self.exit_code: int = exit_code self.stdout: str = stdout self.stderr: str = stderr def __str__(self): return f"'{self.command}' failed (exit code: {self.exit_code})\nstdout:\n{self.stdout}\nstderr:\n{self.stderr}\n" def run_command(command: Any, shell=False) -> str: """ This function is a thin wrapper around Popen/communicate in the subprocess module. It executes the given command and returns its stdout. If the command returns a non-zero exit code, the function raises a CommandError. Similarly to Popen, the 'command' can be a string or a list of strings, and 'shell' indicates whether to execute the command through the shell. 
NOTE: The command's stdout and stderr are read as text streams. """ process = Popen(command, stdout=PIPE, stderr=PIPE, shell=shell, text=True) timer = threading.Timer(15 * 60, process.kill) # Kill process after timeout timer.start() try: stdout, stderr = process.communicate() finally: timer.cancel() if process.returncode == -9: stderr = "The process was killed due to command timeout\n" + stderr raise CommandError(command, process.returncode, stdout, stderr) if process.returncode != 0: raise CommandError(command, process.returncode, stdout, stderr) return stdout Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/ssh_client.py000066400000000000000000000104641510742556200237110ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
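`shell.run_command` above wraps `Popen`/`communicate`, captures stdout/stderr as text, and raises `CommandError` on a non-zero exit code. A stripped-down, runnable sketch of the same pattern (the real implementation also arms a 15-minute kill timer, omitted here for brevity):

```python
# Stripped-down sketch of the run_command/CommandError pattern from shell.py:
# execute a command with stdout/stderr captured as text streams and raise
# CommandError on a non-zero exit code.
import sys
from subprocess import Popen, PIPE


class CommandError(Exception):
    def __init__(self, command, exit_code, stdout, stderr):
        super().__init__(f"'{command}' failed (exit code: {exit_code}): {stderr}")
        self.command = command
        self.exit_code = exit_code
        self.stdout = stdout
        self.stderr = stderr


def run_command(command, shell=False):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=shell, text=True)
    stdout, stderr = process.communicate()
    if process.returncode != 0:
        raise CommandError(command, process.returncode, stdout, stderr)
    return stdout


# A portable invocation: run the current Python interpreter
output = run_command([sys.executable, "-c", "print('hello')"])
```

Raising instead of returning the exit code keeps call sites simple: the happy path gets stdout directly, and failures carry the command, code, and both streams in the exception.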
# import datetime import re from pathlib import Path from azurelinuxagent.common.future import UTC from tests_e2e.tests.lib import shell from tests_e2e.tests.lib.retry import retry_ssh_run ATTEMPTS: int = 3 ATTEMPT_DELAY: int = 30 class SshClient(object): def __init__(self, ip_address: str, username: str, identity_file: Path, port: int = 22): self.ip_address: str = ip_address self.username: str = username self.identity_file: Path = identity_file self.port: int = port def run_command(self, command: str, use_sudo: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> str: """ Executes the given command over SSH and returns its stdout. If the command returns a non-zero exit code, the function raises a CommandError. """ if re.match(r"^\s*sudo\s*", command): raise Exception("Do not include 'sudo' in the 'command' argument, use the 'use_sudo' parameter instead") destination = f"ssh://{self.username}@{self.ip_address}:{self.port}" # Note that we add ~/bin to the remote PATH, since Python (Pypy) and other test tools are installed there. 
# Note, too, that when using sudo we need to carry over the value of PATH to the sudo session sudo = "sudo env PATH=$PATH PYTHONPATH=$PYTHONPATH" if use_sudo else '' command = [ "ssh", "-o", "StrictHostKeyChecking=no", "-i", self.identity_file, destination, f"if [[ -e ~/bin/set-agent-env ]]; then source ~/bin/set-agent-env; fi; {sudo} {command}" ] return retry_ssh_run(lambda: shell.run_command(command), attempts, attempt_delay) @staticmethod def generate_ssh_key(private_key_file: Path) -> None: """ Generates an SSH key on the given Path """ shell.run_command( ["ssh-keygen", "-m", "PEM", "-t", "rsa", "-b", "4096", "-q", "-N", "", "-f", str(private_key_file)]) def get_architecture(self) -> str: return self.run_command("uname -m").rstrip() def get_distro(self): return self.run_command("get_distro.py").rstrip() def get_time(self) -> datetime.datetime: time_string = self.run_command("date --utc '+%Y-%m-%dT%T.%6NZ'").rstrip() return datetime.datetime.strptime(time_string, '%Y-%m-%dT%H:%M:%S.%fZ').replace(tzinfo=UTC) def copy_to_node(self, local_path: Path, remote_path: Path, recursive: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> None: """ File copy to a remote node """ self._copy(local_path, remote_path, remote_source=False, remote_target=True, recursive=recursive, attempts=attempts, attempt_delay=attempt_delay) def copy_from_node(self, remote_path: Path, local_path: Path, recursive: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> None: """ File copy from a remote node """ self._copy(remote_path, local_path, remote_source=True, remote_target=False, recursive=recursive, attempts=attempts, attempt_delay=attempt_delay) def _copy(self, source: Path, target: Path, remote_source: bool, remote_target: bool, recursive: bool, attempts: int, attempt_delay: int) -> None: if remote_source: source = f"{self.username}@{self.ip_address}:{source}" if remote_target: target = f"{self.username}@{self.ip_address}:{target}" 
command = ["scp", "-o", "StrictHostKeyChecking=no", "-i", self.identity_file] if recursive: command.append("-r") command.extend([str(source), str(target)]) return retry_ssh_run(lambda: shell.run_command(command), attempts, attempt_delay) Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/update_arm_template.py000066400000000000000000000144641510742556200255760ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from abc import ABC, abstractmethod from typing import Any, Dict class UpdateArmTemplate(ABC): @abstractmethod def update(self, template: Dict[str, Any], is_lisa_template: bool) -> None: """ Derived classes implement this method to customize the ARM template used to create the test VMs. The 'template' parameter is a dictionary created from the template's JSON document, as parsed by json.loads(). If the 'is_lisa_template' parameter is True, the template was created by LISA. The original JSON document is located at https://github.com/microsoft/lisa/blob/main/lisa/sut_orchestrator/azure/arm_template.json """ @staticmethod def get_resource(resources: Any, type_name: str) -> Any: """ Returns the first resource of the specified type in the given 'resources' list/dict. Raises KeyError if no resource of the specified type is found. 
""" if isinstance(resources, dict): resources = resources.values() for item in resources: if item["type"] == type_name: return item raise KeyError(f"Cannot find a resource of type {type_name} in the ARM template") @staticmethod def get_resource_by_name(resources: Any, resource_name: str, type_name: str) -> Any: """ Returns the first resource of the specified type and name in the given 'resources' list/dict. Raises KeyError if no resource of the specified type and name is found. """ if isinstance(resources, dict): resources = resources.values() for item in resources: if item["type"] == type_name and item["name"] == resource_name: return item raise KeyError(f"Cannot find a resource {resource_name} of type {type_name} in the ARM template") @staticmethod def get_lisa_function(template: Dict[str, Any], function_name: str) -> Dict[str, Any]: """ Looks for the given function name in the bicep namespace and returns its definition. Raises KeyError if the function is not found. Note: LISA uses the bicep language to define its ARM templates; the functions namespace is now '__bicep' instead of 'lisa'. """ # # NOTE: LISA's functions are in the "__bicep" (formerly "lisa") namespace, for example: # # "functions": [ # { # "namespace": "__bicep", # "members": { # "getOSProfile": { # "parameters": [ # { # "name": "computername", # "type": "string" # }, # etc.
# ], # "output": { # "type": "object", # "value": { # "computername": "[parameters('computername')]", # "adminUsername": "[parameters('admin_username')]", # "adminPassword": "[if(parameters('has_password'), parameters('admin_password'), json('null'))]", # "linuxConfiguration": "[if(parameters('has_linux_configuration'), parameters('linux_configuration'), json('null'))]" # } # } # }, # } # } # ] functions = template.get("functions") if functions is None: raise Exception('Cannot find "functions" in the LISA template.') for namespace in functions: name = namespace.get("namespace") if name is None: raise Exception(f'Cannot find "namespace" in the LISA template: {namespace}') if name == "__bicep": lisa_functions = namespace.get('members') if lisa_functions is None: raise Exception(f'Cannot find the members of the lisa namespace in the LISA template: {namespace}') function_definition = lisa_functions.get(function_name) if function_definition is None: raise KeyError(f'Cannot find function {function_name} in the lisa namespace in the LISA template: {namespace}') return function_definition raise Exception(f'Cannot find the "lisa" namespace in the LISA template: {functions}') @staticmethod def get_function_output(function: Dict[str, Any]) -> Dict[str, Any]: """ Returns the "value" property of the output for the given function. Sample function: { "parameters": [ { "name": "computername", "type": "string" }, etc. 
], "output": { "type": "object", "value": { "computername": "[parameters('computername')]", "adminUsername": "[parameters('admin_username')]", "adminPassword": "[if(parameters('has_password'), parameters('admin_password'), json('null'))]", "linuxConfiguration": "[if(parameters('has_linux_configuration'), parameters('linux_configuration'), json('null'))]" } } } """ output = function.get('output') if output is None: raise Exception(f'Cannot find the "output" of the given function: {function}') value = output.get('value') if value is None: raise Exception(f"Cannot find the output's value of the given function: {function}") return value Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/virtual_machine_client.py000066400000000000000000000312611510742556200262640ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute operations on virtual machines (list extensions, restart, etc). 
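The `get_lisa_function`/`get_function_output` pair above navigates the template dictionary: locate the "__bicep" namespace under "functions", find the member by name, then return the "value" of its "output". A minimal sketch of that traversal, using a hypothetical trimmed-down template and a combined `get_function_value` helper:

```python
# Sketch of how get_lisa_function and get_function_output walk a LISA
# template. The template and the get_function_value helper are hypothetical
# simplifications of the original two-step lookup.
template = {
    "functions": [
        {
            "namespace": "__bicep",
            "members": {
                "getOSProfile": {
                    "output": {
                        "type": "object",
                        "value": {"computername": "[parameters('computername')]"}
                    }
                }
            }
        }
    ]
}


def get_function_value(template, function_name):
    for namespace in template.get("functions", []):
        if namespace.get("namespace") == "__bicep":
            function = namespace.get("members", {}).get(function_name)
            if function is None:
                raise KeyError(f"Cannot find function {function_name} in the '__bicep' namespace")
            # Return the "value" property of the function's output
            return function["output"]["value"]
    raise Exception("Cannot find the '__bicep' namespace in the template")


value = get_function_value(template, "getOSProfile")
```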
# import datetime import functools import time from typing import Any, Callable, Dict, List from azure.identity import DefaultAzureCredential from msrestazure.azure_cloud import Cloud from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineExtension, VirtualMachineInstanceView, VirtualMachine from azure.mgmt.network import NetworkManagementClient from azure.mgmt.network.models import NetworkInterface, PublicIPAddress from azure.mgmt.resource import ResourceManagementClient from azurelinuxagent.common.future import UTC from tests_e2e.tests.lib.azure_clouds import AZURE_CLOUDS from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient class VirtualMachineClient(AzureSdkClient): """ Provides operations on virtual machines (get instance view, update, restart, etc). """ def __init__(self, cloud: str, location: str, subscription: str, resource_group: str, name: str): super().__init__() self.cloud: str = cloud self.location = location self.subscription: str = subscription self.resource_group: str = resource_group self.name: str = name self._compute_client = AzureSdkClient.create_client(ComputeManagementClient, cloud, subscription) self._resource_client = AzureSdkClient.create_client(ResourceManagementClient, cloud, subscription) self._network_client = AzureSdkClient.create_client(NetworkManagementClient, cloud, subscription) def create_resource_manager_request(self, request: Callable, operation: str): """ Creates a partial function to invoke the given 'request' providing the 'url' and 'headers' arguments. The URL corresponds to a Resource Manager HTTPS request on the current virtual machine; the 'operation' is appended to the URL and can be used to, for example, specify a query string, or a particular API (e.g. 
"UpgradeVMAgent"). The 'request' callable must accept a 'url' and a 'headers' parameter (e.g. it can be one of the APIs in the requests module, such as get or post. """ cloud: Cloud = AZURE_CLOUDS[self.cloud] credential: DefaultAzureCredential = DefaultAzureCredential(authority=cloud.endpoints.active_directory) token = credential.get_token(f"{cloud.endpoints.resource_manager}/.default") url = f'{cloud.endpoints.resource_manager}/subscriptions/{self.subscription}/resourceGroups/{self.resource_group}/providers/Microsoft.Compute/virtualMachines/{self.name}/{operation}' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {token.token}' } return functools.partial(request, url=url, headers=headers) def get_ip_address(self) -> str: """ Retrieves the public IP address of the virtual machine """ vm_model = self.get_model() nic: NetworkInterface = self._network_client.network_interfaces.get( resource_group_name=self.resource_group, network_interface_name=vm_model.network_profile.network_interfaces[0].id.split('/')[-1]) # the name of the interface is the last component of the id public_ip: PublicIPAddress = self._network_client.public_ip_addresses.get( resource_group_name=self.resource_group, public_ip_address_name=nic.ip_configurations[0].public_ip_address.id.split('/')[-1]) # the name of the ip address is the last component of the id return public_ip.ip_address def get_private_ip_address(self) -> str: """ Retrieves the private IP address of the virtual machine """ vm_model = self.get_model() nic: NetworkInterface = self._network_client.network_interfaces.get( resource_group_name=self.resource_group, network_interface_name=vm_model.network_profile.network_interfaces[0].id.split('/')[ -1]) # the name of the interface is the last component of the id private_ip = nic.ip_configurations[0].private_ip_address return private_ip def get_model(self, include_instance_view: bool = False) -> VirtualMachine: """ Retrieves the model of the virtual machine. 
""" log.info("Retrieving VM model for %s", self) kwargs = { "resource_group_name": self.resource_group, "vm_name": self.name } if include_instance_view: kwargs["expand"] = "instanceView" return execute_with_retry(lambda: self._compute_client.virtual_machines.get(**kwargs)) def get_instance_view(self) -> VirtualMachineInstanceView: """ Retrieves the instance view of the virtual machine """ log.info("Retrieving instance view for %s", self) return execute_with_retry(lambda: self._compute_client.virtual_machines.get( resource_group_name=self.resource_group, vm_name=self.name, expand="instanceView" ).instance_view) def get_extensions(self) -> List[VirtualMachineExtension]: """ Retrieves the extensions installed on the virtual machine """ log.info("Retrieving extensions for %s", self) return execute_with_retry( lambda: self._compute_client.virtual_machine_extensions.list( resource_group_name=self.resource_group, vm_name=self.name)) def delete_all_extensions(self, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Delete all extensions installed on the virtual machine """ extensions_to_delete = self.get_extensions().value for ext in extensions_to_delete: ext_name = ext.name log.info(f"Deleting extension {ext_name} from {self.name}") self._execute_async_operation( lambda extension_name=ext_name: self._compute_client.virtual_machine_extensions.begin_delete( self.resource_group, self.name, extension_name), operation_name=f"Delete extension {ext_name}", timeout=timeout) def update(self, properties: Dict[str, Any], timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Updates a set of properties on the virtual machine """ # location is a required by begin_create_or_update, always add it properties_copy = properties.copy() properties_copy["location"] = self.location log.info("Updating %s with properties: %s", self, properties_copy) self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_create_or_update( self.resource_group, 
self.name, properties_copy), operation_name=f"Update {self}", timeout=timeout) def reapply(self, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Reapplies the goal state on the virtual machine """ self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_reapply(self.resource_group, self.name), operation_name=f"Reapply {self}", timeout=timeout) def restart( self, wait_for_boot, ssh_client: SshClient = None, boot_timeout: datetime.timedelta = datetime.timedelta(minutes=5), timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Restarts (reboots) the virtual machine. NOTES: * If wait_for_boot is True, an SshClient must be provided in order to verify that the restart was successful. * 'timeout' is the timeout for the restart operation itself, while 'boot_timeout' is the timeout for waiting the boot to complete. """ if wait_for_boot and ssh_client is None: raise ValueError("An SshClient must be provided if wait_for_boot is True") before_restart = datetime.datetime.now(UTC) self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_restart( resource_group_name=self.resource_group, vm_name=self.name), operation_name=f"Restart {self}", timeout=timeout) if not wait_for_boot: return start = datetime.datetime.now(UTC) self._wait_for_status("PowerState/running", boot_timeout) # self._wait_for_status() works by checking the instance view, and we may capture a view from before the reboot actually happened, so we verify # that the reboot actually happened by checking the system's uptime. 
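The uptime check described in the comment above can be sketched independently of Azure: take the first field of /proc/uptime (seconds since boot), derive the boot time from it, and compare that against a timestamp captured before the restart was requested. The function and timestamps below are illustrative, not part of the client:

```python
# Illustrative sketch of the uptime-based reboot check: /proc/uptime's first
# field is seconds since boot, so boot time = now - uptime. The reboot is
# considered complete once the boot time falls after the restart request.
import datetime

def boot_completed(uptime_seconds: float, before_restart: datetime.datetime,
                   now: datetime.datetime) -> bool:
    boot_time = now - datetime.timedelta(seconds=uptime_seconds)
    return boot_time > before_restart

# Hypothetical timestamps: restart requested at 11:58, current time 12:00
now = datetime.datetime(2024, 1, 1, 12, 0, 0)
before_restart = datetime.datetime(2024, 1, 1, 11, 58, 0)
print(boot_completed(60.0, before_restart, now))   # True: booted at 11:59
print(boot_completed(600.0, before_restart, now))  # False: booted at 11:50
```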
while datetime.datetime.now(UTC) < start + boot_timeout: log.info("Verifying VM's uptime to ensure the reboot has completed...") try: uptime = ssh_client.run_command("cat /proc/uptime | sed 's/ .*//'", attempts=1).rstrip() # The uptime is the first field in the file log.info("Uptime: %s", uptime) boot_time = datetime.datetime.now(UTC) - datetime.timedelta(seconds=float(uptime)) if boot_time > before_restart: log.info("VM %s completed boot and is running. Boot time: %s", self, boot_time) return log.info("The VM has not rebooted yet. Restart time: %s. Boot time: %s", before_restart, boot_time) except CommandError as e: if (e.exit_code == 255 and ("Connection refused" in str(e) or "Connection timed out" in str(e))) or "Unprivileged users are not permitted to log in yet" in str(e): log.info("VM %s is not yet accepting SSH connections", self) else: raise time.sleep(10) raise Exception(f"VM {self} did not boot after {boot_timeout}") def start(self, start_timeout: datetime.timedelta = datetime.timedelta(minutes=5), timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Starts (allocates) the virtual machine. """ self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_start( resource_group_name=self.resource_group, vm_name=self.name), operation_name=f"Start {self}", timeout=timeout) self._wait_for_status("PowerState/running", start_timeout) def deallocate(self, hibernate: bool = False, deallocate_timeout: datetime.timedelta = datetime.timedelta(minutes=5), timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Deallocates the virtual machine, optionally setting it up to hibernate. 
""" log.info("Deallocating %s [hibernate=%s]...", self, hibernate) self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_deallocate( resource_group_name=self.resource_group, vm_name=self.name, hibernate=hibernate), operation_name=f"Deallocate {self} [hibernate={hibernate}]", timeout=timeout) self._wait_for_status('HibernationState/Hibernated' if hibernate else 'PowerState/deallocated', deallocate_timeout) def _wait_for_status(self, status: str, timeout: datetime.timedelta = datetime.timedelta(minutes=5)) -> None: start = datetime.datetime.now(UTC) while datetime.datetime.now(UTC) < start + timeout: log.info("Waiting for Status to reach %s", status) instance_view = self.get_instance_view() statuses = [s.code for s in instance_view.statuses] log.info("Current status: %s", statuses) if status in statuses: return time.sleep(10) raise Exception(f"VM {self} did not reach {status} after {timeout}") def __str__(self): return f"{self.resource_group}:{self.name}" Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/virtual_machine_extension_client.py000066400000000000000000000170361510742556200303640ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute VM extension operations (enable, remove, etc). 
# import json import uuid from assertpy import assert_that, soft_assertions from typing import Any, Callable, Dict from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineExtension, VirtualMachineExtensionInstanceView from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIdentifier from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient class VirtualMachineExtensionClient(AzureSdkClient): """ Client for operations on virtual machine extensions. """ def __init__(self, vm: VirtualMachineClient, extension: VmExtensionIdentifier, resource_name: str = None): super().__init__() self._vm: VirtualMachineClient = vm self._identifier = extension self._resource_name = resource_name or extension.type self._compute_client: ComputeManagementClient = AzureSdkClient.create_client(ComputeManagementClient, self._vm.cloud, self._vm.subscription) @property def extension_id(self) -> VmExtensionIdentifier: return self._identifier def get_instance_view(self) -> VirtualMachineExtensionInstanceView: """ Retrieves the instance view of the extension """ log.info("Retrieving instance view for %s...", self._identifier) return execute_with_retry(lambda: self._compute_client.virtual_machine_extensions.get( resource_group_name=self._vm.resource_group, vm_name=self._vm.name, vm_extension_name=self._resource_name, expand="instanceView" ).instance_view) def enable( self, settings: Dict[str, Any] = None, protected_settings: Dict[str, Any] = None, auto_upgrade_minor_version: bool = True, force_update: bool = False, force_update_tag: str = None, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT ) -> None: """ Performs an enable operation on the extension. NOTE: 'force_update' is not a parameter of the actual ARM API.
It is provided here for convenience: If set to True, the 'force_update_tag' can be left unspecified and this method will generate a random tag. """ if force_update_tag is not None and not force_update: raise ValueError("If force_update_tag is provided then force_update must be set to true") if force_update and force_update_tag is None: force_update_tag = str(uuid.uuid4()) extension_parameters = VirtualMachineExtension( publisher=self._identifier.publisher, location=self._vm.location, type_properties_type=self._identifier.type, type_handler_version=self._identifier.version, auto_upgrade_minor_version=auto_upgrade_minor_version, settings=settings, protected_settings=protected_settings, force_update_tag=force_update_tag) # Hide the protected settings from logging if protected_settings is not None: extension_parameters.protected_settings = "*****[REDACTED]*****" log.info("Enabling %s", self._identifier) log.info("%s", extension_parameters) # Now set the actual protected settings before invoking the extension extension_parameters.protected_settings = protected_settings result: VirtualMachineExtension = self._execute_async_operation( lambda: self._compute_client.virtual_machine_extensions.begin_create_or_update( self._vm.resource_group, self._vm.name, self._resource_name, extension_parameters), operation_name=f"Enable {self._identifier}", timeout=timeout) log.info("Provisioning state: %s", result.provisioning_state) def delete(self, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Performs a delete operation on the extension """ self._execute_async_operation( lambda: self._compute_client.virtual_machine_extensions.begin_delete( self._vm.resource_group, self._vm.name, self._resource_name), operation_name=f"Delete {self._identifier}", timeout=timeout) def assert_instance_view( self, expected_status_code: str = "ProvisioningState/succeeded", expected_version: str = None, expected_message: str = None, assert_function: 
Callable[[VirtualMachineExtensionInstanceView], None] = None ) -> None: """ Asserts that the extension's instance view matches the given expected values. If 'expected_version' and/or 'expected_message' are omitted, they are not validated. If 'assert_function' is provided, it is invoked passing as parameter the instance view. This function can be used to perform additional validations. """ # Sometimes we get incomplete instance view with only 'name' property which causes issues during assertions. # Retry attempt to get instance view if only 'name' property is populated. attempt = 1 instance_view = self.get_instance_view() while instance_view.name is not None and instance_view.type_handler_version is None and instance_view.statuses is None and attempt < 3: log.info("Instance view is incomplete: %s\nRetrying attempt to get instance view...", instance_view.serialize()) instance_view = self.get_instance_view() attempt += 1 log.info("Instance view:\n%s", json.dumps(instance_view.serialize(), indent=4)) with soft_assertions(): if expected_version is not None: # Compare only the major and minor versions (i.e. 
the first 2 items in the result of split()) installed_version = instance_view.type_handler_version assert_that(expected_version.split(".")[0:2]).described_as("Unexpected extension version").is_equal_to(installed_version.split(".")[0:2]) assert_that(instance_view.statuses).described_as(f"Expected 1 status, got: {instance_view.statuses}").is_length(1) status = instance_view.statuses[0] if expected_message is not None: assert_that(expected_message in status.message).described_as(f"{expected_message} should be in the InstanceView message ({status.message})").is_true() assert_that(status.code).described_as("InstanceView status code").is_equal_to(expected_status_code) if assert_function is not None: assert_function(instance_view) log.info("The instance view matches the expected values") def __str__(self): return f"{self._identifier}" Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/virtual_machine_runcommand_client.py000066400000000000000000000126231510742556200305100ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute VM extension runcommand operations (enable, remove, etc). 
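The major/minor-only version comparison used in assert_instance_view above (comparing just the first two items of split(".")) can be shown in isolation; same_major_minor is a hypothetical helper name used only for this sketch:

```python
# Sketch of the version assertion: only the major and minor components are
# compared, so a differing patch level does not fail the check.
def same_major_minor(expected: str, installed: str) -> bool:
    return expected.split(".")[0:2] == installed.split(".")[0:2]

print(same_major_minor("2.1", "2.1.10"))  # True: patch level is ignored
print(same_major_minor("2.1", "2.2.0"))   # False: minor versions differ
```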
# import json from typing import Any, Dict, Callable from assertpy import soft_assertions, assert_that from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineRunCommand, VirtualMachineRunCommandScriptSource, VirtualMachineRunCommandInstanceView from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIdentifier class VirtualMachineRunCommandClient(AzureSdkClient): """ Client for operations on virtual machine RunCommand extensions. """ def __init__(self, vm: VirtualMachineClient, extension: VmExtensionIdentifier, resource_name: str = None): super().__init__() self._vm: VirtualMachineClient = vm self._identifier = extension self._resource_name = resource_name or extension.type self._compute_client: ComputeManagementClient = AzureSdkClient.create_client(ComputeManagementClient, self._vm.cloud, self._vm.subscription) def get_instance_view(self) -> VirtualMachineRunCommandInstanceView: """ Retrieves the instance view of the run command extension """ log.info("Retrieving instance view for %s...", self._identifier) return execute_with_retry(lambda: self._compute_client.virtual_machine_run_commands.get_by_virtual_machine( resource_group_name=self._vm.resource_group, vm_name=self._vm.name, run_command_name=self._resource_name, expand="instanceView" ).instance_view) def enable( self, settings: Dict[str, Any] = None, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT ) -> None: """ Performs an enable operation on the run command extension.
""" run_command_parameters = VirtualMachineRunCommand( location=self._vm.location, source=VirtualMachineRunCommandScriptSource( script=settings.get("source") if settings is not None else settings ) ) log.info("Enabling %s", self._identifier) log.info("%s", run_command_parameters) result: VirtualMachineRunCommand = self._execute_async_operation( lambda: self._compute_client.virtual_machine_run_commands.begin_create_or_update( self._vm.resource_group, self._vm.name, self._resource_name, run_command_parameters), operation_name=f"Enable {self._identifier}", timeout=timeout) log.info("Provisioning state: %s", result.provisioning_state) def delete(self, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Performs a delete operation on the run command extension """ self._execute_async_operation( lambda: self._compute_client.virtual_machine_run_commands.begin_delete( self._vm.resource_group, self._vm.name, self._resource_name), operation_name=f"Delete {self._identifier}", timeout=timeout) def assert_instance_view( self, expected_status_code: str = "Succeeded", expected_exit_code: int = 0, expected_message: str = None, assert_function: Callable[[VirtualMachineRunCommandInstanceView], None] = None ) -> None: """ Asserts that the run command's instance view matches the given expected values. If 'expected_message' is omitted, it is not validated. If 'assert_function' is provided, it is invoked passing as parameter the instance view. This function can be used to perform additional validations. 
""" instance_view = self.get_instance_view() log.info("Instance view:\n%s", json.dumps(instance_view.serialize(), indent=4)) with soft_assertions(): if expected_message is not None: assert_that(expected_message in instance_view.output).described_as(f"{expected_message} should be in the InstanceView message ({instance_view.output})").is_true() assert_that(instance_view.execution_state).described_as("InstanceView execution state").is_equal_to(expected_status_code) assert_that(instance_view.exit_code).described_as("InstanceView exit code").is_equal_to(expected_exit_code) if assert_function is not None: assert_function(instance_view) log.info("The instance view matches the expected values") def __str__(self): return f"{self._identifier}" Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/virtual_machine_scale_set_client.py000066400000000000000000000113111510742556200303000ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute operations on virtual machines scale sets (list instances, delete, etc). 
# import re from typing import List from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineScaleSetVM, VirtualMachineScaleSetInstanceView from azure.mgmt.network import NetworkManagementClient from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry class VmssInstanceIpAddress(object): """ IP address of a virtual machine scale set instance """ def __init__(self, instance_name: str, ip_address: str): self.instance_name: str = instance_name self.ip_address: str = ip_address def __str__(self): return f"{self.instance_name}:{self.ip_address}" class VirtualMachineScaleSetClient(AzureSdkClient): """ Provides operations on virtual machine scale sets. """ def __init__(self, cloud: str, location: str, subscription: str, resource_group: str, name: str): super().__init__() self.cloud: str = cloud self.location = location self.subscription: str = subscription self.resource_group: str = resource_group self.name: str = name self._compute_client = AzureSdkClient.create_client(ComputeManagementClient, cloud, subscription) self._network_client = AzureSdkClient.create_client(NetworkManagementClient, cloud, subscription) def list_vms(self) -> List[VirtualMachineScaleSetVM]: """ Returns the VM instances of the virtual machine scale set """ log.info("Retrieving instances of scale set %s", self) return list(self._compute_client.virtual_machine_scale_set_vms.list(resource_group_name=self.resource_group, virtual_machine_scale_set_name=self.name)) def get_instances_ip_address(self) -> List[VmssInstanceIpAddress]: """ Returns a list containing the IP addresses of scale set instances """ log.info("Retrieving IP addresses of scale set %s", self) ip_addresses = self._network_client.public_ip_addresses.list_virtual_machine_scale_set_public_ip_addresses(resource_group_name=self.resource_group, virtual_machine_scale_set_name=self.name) 
ip_addresses = list(ip_addresses) def parse_instance(resource_id: str) -> str: # the resource_id looks like /subscriptions/{subs}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmss}/virtualMachines/{instance}/networkInterfaces/{nic}/ipConfigurations/ipconfig1/publicIPAddresses/{name} match = re.search(r'virtualMachines/(?P<instance>[0-9]+)/networkInterfaces', resource_id) if match is None: raise Exception(f"Unable to parse instance from IP address ID: {resource_id}") return match.group('instance') return [VmssInstanceIpAddress(instance_name=f"{self.name}_{parse_instance(a.id)}", ip_address=a.ip_address) for a in ip_addresses if a.ip_address is not None] def delete_extension(self, extension: str, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Deletes the given extension """ log.info("Deleting extension %s from %s", extension, self) self._execute_async_operation( operation=lambda: self._compute_client.virtual_machine_scale_set_extensions.begin_delete(resource_group_name=self.resource_group, vm_scale_set_name=self.name, vmss_extension_name=extension), operation_name=f"Delete {extension} from {self}", timeout=timeout) def get_instance_view(self) -> VirtualMachineScaleSetInstanceView: """ Retrieves the instance view of the virtual machine scale set """ log.info("Retrieving instance view for %s", self) return execute_with_retry(lambda: self._compute_client.virtual_machine_scale_sets.get_instance_view( resource_group_name=self.resource_group, vm_scale_set_name=self.name )) def __str__(self): return f"{self.resource_group}:{self.name}" Azure-WALinuxAgent-a976115/tests_e2e/tests/lib/vm_extension_identifier.py000066400000000000000000000105051510742556200264720ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from typing import Dict, List class VmExtensionIdentifier(object): """ Represents the information that identifies an extension to the ARM APIs publisher - e.g. Microsoft.Azure.Extensions type - e.g. CustomScript version - e.g. 2.1, 2.* name - arbitrary name for the extension ARM resource """ def __init__(self, publisher: str, ext_type: str, version: str): self.publisher: str = publisher self.type: str = ext_type self.version: str = version unsupported_distros: Dict[str, List[str]] = { "Microsoft.OSTCExtensions.VMAccessForLinux": ["flatcar"], "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent": ["flatcar", "mariner_1", "ubuntu_2404", "sles_15", "rhel_10"], "Microsoft.GuestConfiguration.ConfigurationforLinux": ["flatcar"], "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent": ["flatcar"], # TODO: RCv2 currently fails on AzureCloud on the distros below due to GLIBC < 2.34. Once the extension is fixed to support older GLIB versions, remove this entry. "Microsoft.CPlat.Core.RunCommandHandlerLinux": ["almalinux_810", "centos_82", "debian_9", "debian_10", "debian_11", "redhat_810", "ubuntu_1804", "ubuntu_2004"] } def supports_distro(self, system_info: str) -> bool: """ Returns true if an unsupported distro name for the extension is found in the provided system info """ ext_unsupported_distros = VmExtensionIdentifier.unsupported_distros.get(self.publisher + "." 
+ self.type) if ext_unsupported_distros is not None and any(distro in system_info for distro in ext_unsupported_distros): return False return True def __str__(self): return f"{self.publisher}.{self.type}" class VmExtensionIds(object): """ A set of extensions used by the tests, listed here for convenience (easy to reference them by name). Only the major version is specified, and the minor version is set to 0 (set autoUpgradeMinorVersion to True in the call to enable to use the latest version) """ AzureMonitorLinuxAgent: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Monitor', ext_type='AzureMonitorLinuxAgent', version="1.5") AzureSecurityLinuxAgent: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Security.Monitoring', ext_type='AzureSecurityLinuxAgent', version="2.0") CustomScript: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Extensions', ext_type='CustomScript', version="2.0") GATestExtension: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Extensions.Edp', ext_type='GATestExtGo', version="1.2") GuestAgentDcrTestExtension: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.TestExtensions.Edp', ext_type='GuestAgentDcrTest', version='1.0') GuestConfig: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.GuestConfiguration', ext_type='ConfigurationforLinux', version="1.0") Hibernate: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.CPlat.Core', ext_type='LinuxHibernateExtension', version="1.0") # Older run command extension, still used by the Portal as of Dec 2022 RunCommand: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.CPlat.Core', ext_type='RunCommandLinux', version="1.0") # New run command extension, with support for multi-config RunCommandHandler: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.CPlat.Core', ext_type='RunCommandHandlerLinux', version="1.0") VmAccess: 
VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.OSTCExtensions', ext_type='VMAccessForLinux', version="1.0") Azure-WALinuxAgent-a976115/tests_e2e/tests/log_collector/000077500000000000000000000000001510742556200232605ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/log_collector/log_collector.py000077500000000000000000000111471510742556200264700ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import re import time from assertpy import fail from azurelinuxagent.common.utils.shellutil import CommandError from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log class LogCollector(AgentVmTest): """ Tests that the log collector logs the expected behavior on periodic runs. 
""" def run(self): ssh_client = self._context.create_ssh_client() # Rename the agent log file so that the test does not pick up any incomplete log collector runs that started # before the config is updated # Reduce log collector initial delay via config log.info("Renaming agent log file and modifying log collector conf flags") setup_script = ("agent-service stop && mv /var/log/waagent.log /var/log/waagent.$(date --iso-8601=seconds).log && " "update-waagent-conf Logs.Collect=y Debug.LogCollectorInitialDelay=60") ssh_client.run_command(f"sh -c '{setup_script}'", use_sudo=True) log.info('Renamed log file and updated log collector config flags') # Wait for log collector to finish uploading logs for _ in range(3): time.sleep(90) try: ssh_client.run_command("grep 'Successfully uploaded logs' /var/log/waagent.log") break except CommandError: log.info("The Agent has not finished log collection, will check again after a short delay") else: raise Exception("Timeout while waiting for the Agent to finish log collection") # Get any agent logs between log collector start and finish try: # We match the first full log collector run in the agent log (this test just needs to validate any full log collector run, does not matter if it's the first or last) lc_start_pattern = "INFO CollectLogsHandler ExtHandler Starting log collection" lc_end_pattern = "INFO CollectLogsHandler ExtHandler Successfully uploaded logs" output = ssh_client.run_command("sed -n '/{0}/,/{1}/{{p;/{1}/q}}' /var/log/waagent.log".format(lc_start_pattern, lc_end_pattern)).rstrip().splitlines() except Exception as e: raise Exception("Unable to get log collector logs from waagent.log: {0}".format(e)) # These logs indicate a successful log collector run with resource enforcement and monitoring expected = [ r'.*Starting log collection', r'.*Using cgroup v\d for resource enforcement and monitoring', r'.*cpu controller for cgroup: azure-walinuxagent-logcollector 
\[\/sys\/fs\/cgroup(\/cpu,cpuacct)?\/azure.slice\/azure-walinuxagent.slice\/azure-walinuxagent\-logcollector.slice\/collect\-logs.scope\]', r'.*memory controller for cgroup: azure-walinuxagent-logcollector \[\/sys\/fs\/cgroup(\/memory)?\/azure.slice\/azure-walinuxagent.slice\/azure-walinuxagent\-logcollector.slice\/collect\-logs.scope\]', r'.*Log collection successfully completed', r'.*Successfully collected logs', r'.*Successfully uploaded logs' ] # Filter output to only include relevant log collector logs lc_logs = [log for log in output if len([pattern for pattern in expected if re.match(pattern, log)]) > 0] # Check that all expected logs exist and are in the correct order indent = lambda lines: "\n".join([f" {ln}" for ln in lines]) if len(lc_logs) == len(expected) and all([re.match(expected[i], lc_logs[i]) is not None for i in range(len(expected))]): log.info("The log collector run completed as expected.\nLog messages:\n%s", indent(lc_logs)) else: fail(f"The log collector run did not complete as expected.\nExpected:\n{indent(expected)}\nActual:\n{indent(lc_logs)}") ssh_client.run_command("update-waagent-conf Debug.EnableCgroupV2ResourceLimiting=n Debug.LogCollectorInitialDelay=5*60", use_sudo=True) if __name__ == "__main__": LogCollector.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/multi_config_ext/000077500000000000000000000000001510742556200237705ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/multi_config_ext/multi_config_ext.py000066400000000000000000000205361510742556200277070ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test adds multiple instances of RCv2 and verifies that the extensions are processed and deleted as expected. # import uuid from typing import Dict, Callable, Any from assertpy import fail from azure.mgmt.compute.models import VirtualMachineInstanceView from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.virtual_machine_runcommand_client import VirtualMachineRunCommandClient from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class MultiConfigExt(AgentVmTest): class TestCase: def __init__(self, extension: AzureSdkClient, get_settings: Callable[[str], Dict[str, str]]): self.extension = extension self.get_settings = get_settings self.test_guid: str = str(uuid.uuid4()) def enable_and_assert_test_cases(self, cases_to_enable: Dict[str, TestCase], cases_to_assert: Dict[str, TestCase], delete_extensions: bool = False): for resource_name, test_case in cases_to_enable.items(): log.info("") log.info("Adding {0} to the test VM. 
guid={1}".format(resource_name, test_case.test_guid)) kwargs = {"settings": test_case.get_settings(test_case.test_guid)} if isinstance(test_case.extension, VirtualMachineExtensionClient) and test_case.extension.extension_id == VmExtensionIds.CustomScript: kwargs["protected_settings"] = {} test_case.extension.enable(**kwargs) test_case.extension.assert_instance_view() log.info("") log.info("Check that each extension has the expected guid in its status message...") for resource_name, test_case in cases_to_assert.items(): log.info("") log.info("Checking {0} has expected status message with {1}".format(resource_name, test_case.test_guid)) test_case.extension.assert_instance_view(expected_message=f"{test_case.test_guid}") # Delete each extension on the VM if delete_extensions: log.info("") log.info("Delete each extension...") self.delete_extensions(cases_to_assert) def delete_extensions(self, test_cases: Dict[str, TestCase]): for resource_name, test_case in test_cases.items(): log.info("") log.info("Deleting {0} from the test VM".format(resource_name)) test_case.extension.delete() log.info("") vm: VirtualMachineClient = VirtualMachineClient( cloud=self._context.vm.cloud, location=self._context.vm.location, subscription=self._context.vm.subscription, resource_group=self._context.vm.resource_group, name=self._context.vm.name) instance_view: VirtualMachineInstanceView = vm.get_instance_view() if instance_view.extensions is not None: for ext in instance_view.extensions: if ext.name in test_cases.keys(): fail("Extension was not deleted: \n{0}".format(ext)) log.info("") log.info("All extensions were successfully deleted.") def run(self): # Create 3 different RCv2 extensions and a single config extension (CSE) and assign each a unique guid. Each # extension will have settings that echo its assigned guid. We will use this guid to verify the extension # statuses later. 
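The test cases above pair each extension client with a settings factory that bakes a unique GUID into the extension's command, so the GUID can later be asserted in the instance-view status message. A small sketch of that pattern (the factory names are ours, not the test's):

```python
import uuid

# Illustrative factories mirroring the mc_settings/sc_settings lambdas: each
# extension echoes its assigned guid so the guid surfaces in the
# instance-view status message
def make_mc_settings(guid):
    return {"source": "echo {0}".format(guid)}  # RunCommandHandler (multi-config)

def make_sc_settings(guid):
    return {"commandToExecute": "echo {0}".format(guid)}  # CustomScript (single-config)

test_guid = str(uuid.uuid4())
mc = make_mc_settings(test_guid)
sc = make_sc_settings(test_guid)
assert test_guid in mc["source"] and test_guid in sc["commandToExecute"]
```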
mc_settings: Callable[[Any], Dict[str, str]] = lambda s: {"source": f"echo {s}"} sc_settings: Callable[[Any], Dict[str, str]] = lambda s: {'commandToExecute': f"echo {s}"} test_cases: Dict[str, MultiConfigExt.TestCase] = { "MCExt1": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt1"), mc_settings), "MCExt2": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt2"), mc_settings), "MCExt3": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt3"), mc_settings), "CSE": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript), sc_settings) } # Add each extension to the VM and validate the instance view has succeeded status with its assigned guid in the # status message log.info("") log.info("Add CSE and 3 instances of RCv2 to the VM. 
Each instance will echo a unique guid...") self.enable_and_assert_test_cases(cases_to_enable=test_cases, cases_to_assert=test_cases) # Update MCExt3 and CSE with new guids and add a new instance of RCv2 to the VM updated_test_cases: Dict[str, MultiConfigExt.TestCase] = { "MCExt3": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt3"), mc_settings), "MCExt4": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt4"), mc_settings), "CSE": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript), sc_settings) } test_cases.update(updated_test_cases) # Enable only the updated extensions, verify every extension has the correct test guid is in status message, and # remove all extensions from the test vm log.info("") log.info("Update MCExt3 and CSE with new guids and add a new instance of RCv2 to the VM...") self.enable_and_assert_test_cases(cases_to_enable=updated_test_cases, cases_to_assert=test_cases, delete_extensions=True) # Enable, verify, and remove only multi config extensions log.info("") log.info("Add only multi-config extensions to the VM...") mc_test_cases: Dict[str, MultiConfigExt.TestCase] = { "MCExt5": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt5"), mc_settings), "MCExt6": MultiConfigExt.TestCase( VirtualMachineRunCommandClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt6"), mc_settings) } self.enable_and_assert_test_cases(cases_to_enable=mc_test_cases, cases_to_assert=mc_test_cases, delete_extensions=True) # Enable, verify, and delete only single config extensions log.info("") log.info("Add only single-config extension to the VM...") sc_test_cases: Dict[str, MultiConfigExt.TestCase] = { "CSE": MultiConfigExt.TestCase( 
VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript), sc_settings) } self.enable_and_assert_test_cases(cases_to_enable=sc_test_cases, cases_to_assert=sc_test_cases, delete_extensions=True) if __name__ == "__main__": MultiConfigExt.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/no_outbound_connections/000077500000000000000000000000001510742556200253665ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/no_outbound_connections/check_fallback_to_hgap.py000077500000000000000000000043631510742556200323460ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from assertpy import assert_that from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient class CheckFallbackToHGAP(AgentVmTest): """ Check the agent log to verify that the default channel was changed to HostGAPlugin before executing any extensions. """ def run(self): # 2023-04-14T14:49:43.005530Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
# 2023-04-14T14:49:44.625061Z INFO ExtHandler [Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.25.2] Target handler state: enabled [incarnation_2] ssh_client: SshClient = self._context.create_ssh_client() log.info("Parsing agent log on the test VM") output = ssh_client.run_command("grep -E 'INFO ExtHandler.*(Default channel changed to HostGAPlugin)|(Target handler state:)' /var/log/waagent.log | head").split('\n') log.info("Output (first 10 lines) from the agent log:\n\t\t%s", '\n\t\t'.join(output)) assert_that(len(output) > 1).is_true().described_as( "The agent log should contain multiple matching records" ) assert_that(output[0]).contains("Default channel changed to HostGAPlugin").described_as( "The agent log should contain a record indicating that the default channel was changed to HostGAPlugin before executing any extensions" ) log.info("The agent log indicates that the default channel was changed to HostGAPlugin before executing any extensions") if __name__ == "__main__": CheckFallbackToHGAP.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/no_outbound_connections/check_no_outbound_connections.py000077500000000000000000000102051510742556200340330ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
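CheckFallbackToHGAP greps for both record types and asserts the channel-change record appears before any extension is processed. That ordering check can be sketched standalone like this (function name and sample lines are ours):

```python
import re

def default_channel_changed_first(log_lines):
    # The 'Default channel changed' record must precede the first
    # 'Target handler state:' record in the agent log
    pattern = re.compile(r"INFO ExtHandler.*(Default channel changed to HostGAPlugin|Target handler state:)")
    matches = [line for line in log_lines if pattern.search(line)]
    return len(matches) > 1 and "Default channel changed to HostGAPlugin" in matches[0]

sample = [
    "2023-04-14T14:49:43Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.",
    "2023-04-14T14:49:44Z INFO ExtHandler [AzureMonitorLinuxAgent] Target handler state: enabled",
]
print(default_channel_changed_first(sample))  # True
```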
# from assertpy import fail from typing import Any, Dict, List from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.agent_test_context import AgentTestContext from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient class CheckNoOutboundConnections(AgentVmTest): """ Verifies that there is no outbound connectivity on the test VM. """ def __init__(self, context: AgentTestContext): super().__init__(context) self.__distro: str = None @property def distro(self) -> str: if self.__distro is None: raise Exception("The distro has not been initialized") return self.__distro def run(self): # This script is executed on the test VM. It tries to connect to a well-known DNS server (DNS is on port 53). script: str = """ import socket, sys try: socket.setdefaulttimeout(5) socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(("8.8.8.8", 53)) except socket.timeout: print("No outbound connectivity [expected]") exit(0) print("There is outbound connectivity [unexpected: the custom ARM template should not allow it]", file=sys.stderr) exit(1) """ ssh_client: SshClient = self._context.create_ssh_client() try: self.__distro = ssh_client.get_distro() log.info("Distro: %s", self.distro) except Exception as e: log.warning("Could not determine the distro (setting to UNKNOWN): %s", e) self.__distro = "UNKNOWN" try: log.info("Verifying that there is no outbound connectivity on the test VM") ssh_client.run_command("pypy3 -c '{0}'".format(script.replace('"', '\"'))) log.info("There is no outbound connectivity, as expected.") except CommandError as e: if e.exit_code == 1 and "There is outbound connectivity" in e.stderr: fail("There is outbound connectivity on the test VM, the custom ARM template should not allow it") else: raise Exception(f"Unexpected error while checking outbound connectivity on the test VM: {e}") def get_ignore_error_rules(self) -> List[Dict[str, Any]]: return 
[ # # RHEL 8.2 uses a very old Daemon (2.3.0.2) that does not create the 'ACCEPT DNS' rule. Even with auto-update enabled, the rule is not created for this test, since outbound connectivity is disabled # and attempts to get the VM Artifacts Profile blob fail after a long timeout (which prevents the self-update Agent to create the rule before the test starts running). Then, this message is # expected and should be ignored. # # 2025-01-16T09:30:54.048522Z WARNING ExtHandler ExtHandler The permanent firewall rules for Azure Fabric are not setup correctly (The following rules are missing: ['ACCEPT DNS'] due to: ['']), will reset them. Current state: # ipv4 -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT # ipv4 -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP # { 'message': r"The permanent firewall rules for Azure Fabric are not setup correctly.*The following rules are missing: \['ACCEPT DNS'\]", 'if': lambda _: self.distro == 'redhat_82' } ] if __name__ == "__main__": CheckNoOutboundConnections.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/no_outbound_connections/deny_outbound_connections.py000077500000000000000000000032641510742556200332300ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
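The connectivity probe the test pushes to the VM attempts a TCP connection to a well-known DNS server and treats a timeout as proof that outbound traffic is blocked. A local sketch of the same probe, using `socket.create_connection` instead of the raw socket calls in the embedded script:

```python
import socket

def has_outbound_connectivity(host="8.8.8.8", port=53, timeout=5):
    # Try a TCP connection to a well-known DNS server; in this test a
    # timeout is the expected outcome because the NSG denies outbound traffic
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers socket.timeout and 'network is unreachable'
        return False
```

Note the real script distinguishes a timeout (expected) from other failures; this sketch collapses all `OSError`s into "no connectivity" for brevity.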
# import json from typing import Any, Dict from tests_e2e.tests.lib.network_security_rule import NetworkSecurityRule from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate class DenyOutboundConnections(UpdateArmTemplate): """ Updates the ARM template to add a security rule that denies all outbound connections. """ def update(self, template: Dict[str, Any], is_lisa_template: bool) -> None: NetworkSecurityRule(template, is_lisa_template).add_security_rule( json.loads("""{ "name": "waagent-no-outbound", "properties": { "description": "Denies all outbound connections.", "protocol": "*", "sourcePortRange": "*", "destinationPortRange": "*", "sourceAddressPrefix": "*", "destinationAddressPrefix": "Internet", "access": "Deny", "priority": 200, "direction": "Outbound" } }""")) Azure-WALinuxAgent-a976115/tests_e2e/tests/publish_hostname/000077500000000000000000000000001510742556200237755ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/publish_hostname/publish_hostname.py000066400000000000000000000307351510742556200277230ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test updates the hostname and checks that the agent published the hostname to DNS. It also checks that the # primary network is up after publishing the hostname. 
This test was added in response to a bug in publishing the # hostname on fedora distros, where there was a race condition between NetworkManager restart and Network Interface # restart which caused the primary interface to go down. # import datetime import re from typing import List, Dict, Any from assertpy import fail from time import sleep from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.agent_test import AgentVmTest, TestSkipped from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log from azurelinuxagent.common.future import UTC class PublishHostname(AgentVmTest): def __init__(self, context: AgentVmTestContext): super().__init__(context) self._context = context self._ssh_client = context.create_ssh_client() self._private_ip = context.vm.get_private_ip_address() self._vm_password = "" def add_vm_password(self): # Add password to VM to help with debugging in case of failure # REMOVE PWD FROM LOGS IF WE EVER MAKE THESE RUNS/LOGS PUBLIC username = self._ssh_client.username pwd = self._ssh_client.run_command("openssl rand -base64 32 | tr : .").rstrip() self._vm_password = pwd log.info("VM Username: {0}; VM Password: {1}".format(username, pwd)) self._ssh_client.run_command("echo '{0}:{1}' | sudo -S chpasswd".format(username, pwd)) def check_and_install_dns_tools(self): lookup_cmd = "dig -x {0}".format(self._private_ip) dns_regex = r"[\S\s]*;; ANSWER SECTION:\s.*PTR\s*(?P<hostname>.*)\.internal\.(cloudapp\.net|chinacloudapp\.cn|usgovcloudapp\.net).*[\S\s]*" # Not all distros come with dig.
Install dig if not on machine try: self._ssh_client.run_command("dig -v") except CommandError as e: if "dig: command not found" in e.stderr: distro = self._ssh_client.run_command("get_distro.py").rstrip().lower() if distro.startswith("debian_9") or distro.startswith("debian_10"): # Debian 9 hostname lookup needs to be done with "host" instead of dig lookup_cmd = "host {0}".format(self._private_ip) dns_regex = r".*pointer\s(?P<hostname>.*)\.internal\.(cloudapp\.net|chinacloudapp\.cn|usgovcloudapp\.net).*" elif "debian" in distro: self._ssh_client.run_command("apt install -y dnsutils", use_sudo=True) elif "alma" in distro or "rocky" in distro: self._ssh_client.run_command("dnf install -y bind-utils", use_sudo=True) else: raise else: raise return lookup_cmd, dns_regex def check_agent_reports_status(self): status_updated = False last_agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time log.info("Agent reported status at {0}".format(last_agent_status_time)) retries = 3 while retries > 0 and not status_updated: agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time if agent_status_time != last_agent_status_time: status_updated = True log.info("Agent reported status at {0}".format(last_agent_status_time)) else: retries -= 1 sleep(60) if not status_updated: fail("Agent hasn't reported status since {0} and ssh connection failed. Use the serial console in portal " "to check the contents of '/sys/class/net/eth0/operstate'. If the contents of this file are 'up', " "no further action is needed. 
If contents are 'down', that indicates the network interface is down " "and more debugging needs to be done to confirm this is not caused by the agent.\n VM: {1}\n RG: {2}" "\nSubscriptionId: {3}\nUsername: {4}\nPassword: {5}".format(last_agent_status_time, self._context.vm, self._context.vm.resource_group, self._context.vm.subscription, self._context.username, self._vm_password)) def retry_ssh_if_connection_reset(self, command: str, use_sudo=False): # pylint: disable=inconsistent-return-statements # The agent may bring the network down and back up to publish the hostname, which can reset the ssh connection. # Adding retry here for connection reset. retries = 3 while retries > 0: try: return self._ssh_client.run_command(command, use_sudo=use_sudo) except CommandError as e: retries -= 1 retryable = e.exit_code == 255 and "Connection reset by peer" in e.stderr if not retryable or retries == 0: raise log.warning("The SSH operation failed, retrying in 30 secs") sleep(30) def run(self): # TODO: Investigate why hostname is not being published on these distros. distros_with_known_publishing_issues = ["alma", "oracle_95", "oracle_810", "redhat_810", "rhel_95", "rocky", "ubuntu"] distro = self._ssh_client.run_command("get_distro.py").rstrip().lower() if any(d in distro for d in distros_with_known_publishing_issues): raise TestSkipped("Known issue with hostname publishing on this distro. Will skip test until we continue " "investigation.") # Add password to VM and log. This allows us to debug with serial console if necessary self.add_vm_password() # This test looks up what hostname is published to dns. Check that the tools necessary to get hostname are # installed, and if not install them. lookup_cmd, dns_regex = self.check_and_install_dns_tools() # Check if this distro monitors hostname changes. If it does, we should check that the agent detects the change # and publishes the host name. If it doesn't, we should check that the hostname is automatically published. 
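The `retry_ssh_if_connection_reset` helper above retries only when the failure looks like an SSH connection reset (exit code 255), since the agent may bounce the network while publishing the hostname. A simplified standalone version of that retry loop, with an illustrative exception type standing in for the framework's `CommandError`:

```python
import time

class SshConnectionReset(Exception):
    """Illustrative stand-in for CommandError with exit code 255 and
    'Connection reset by peer' in stderr."""

def run_with_reset_retry(run_command, retries=3, delay=0):
    # Retry only the connection-reset failure mode; anything else propagates
    while True:
        try:
            return run_command()
        except SshConnectionReset:
            retries -= 1
            if retries == 0:
                raise
            time.sleep(delay)

calls = {"count": 0}
def flaky_command():
    calls["count"] += 1
    if calls["count"] < 3:
        raise SshConnectionReset("Connection reset by peer")
    return "ok"

result = run_with_reset_retry(flaky_command)
print(result)  # ok
```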
monitors_hostname = self._ssh_client.run_command("get-waagent-conf-value Provisioning.MonitorHostName", use_sudo=True).rstrip().lower() log.info(f"Distro: {distro} Provisioning.MonitorHostName: {monitors_hostname}") hostname_change_ctr = 0 # Update the hostname 3 times while hostname_change_ctr < 3: try: hostname = "hostname-monitor-{0}".format(hostname_change_ctr) log.info("Update hostname to {0}".format(hostname)) self.retry_ssh_if_connection_reset("hostnamectl set-hostname {0}".format(hostname), use_sudo=True) # Wait for the agent to detect the hostname change for up to 2 minutes if hostname monitoring is enabled if monitors_hostname == "y" or monitors_hostname == "yes": log.info("Agent hostname monitoring is enabled") hostname_detected = "" for retry in range(4, -1, -1): try: hostname_detected = self.retry_ssh_if_connection_reset("grep -n 'Detected hostname change:.*-> {0}' /var/log/waagent.log".format(hostname), use_sudo=True) if hostname_detected: log.info("Agent detected hostname change: {0}".format(hostname_detected)) break except CommandError as e: # Exit code 1 indicates grep did not find a match. Sleep if exit code is 1, otherwise raise. if e.exit_code != 1: raise if retry > 0: log.info("Agent hasn't detected hostname change yet. 
Retrying after 30 seconds...") sleep(30) if not hostname_detected: fail("Agent did not detect hostname change: {0}".format(hostname)) else: log.info("Agent hostname monitoring is disabled") # Check that the expected hostname is published with 4 minute timeout timeout = datetime.datetime.now(UTC) + datetime.timedelta(minutes=4) published_hostname = "" while datetime.datetime.now(UTC) <= timeout: try: dns_info = self.retry_ssh_if_connection_reset(lookup_cmd) actual_hostname = re.match(dns_regex, dns_info) if actual_hostname: # Compare published hostname to expected hostname published_hostname = actual_hostname.group('hostname') if hostname == published_hostname: log.info("SUCCESS Hostname {0} was published successfully".format(hostname)) break else: log.info("Unable to parse the dns info: {0}".format(dns_info)) except CommandError as e: if "NXDOMAIN" in e.stdout: log.info("DNS Lookup could not find domain. Will try again.") else: raise sleep(30) if published_hostname == "" or published_hostname != hostname: fail("Hostname {0} was not published successfully. Actual host name is: {1}".format(hostname, published_hostname)) hostname_change_ctr += 1 except CommandError as e: # If failure is ssh issue, we should confirm that the VM did not lose network connectivity due to the # agent's operations on the network. If agent reports status after this failure, then we know the # network is up. 
if e.exit_code == 255 and ("Connection timed out" in e.stderr or "Connection refused" in e.stderr): self.check_agent_reports_status() raise def get_ignore_error_rules(self) -> List[Dict[str, Any]]: ignore_rules = [ # # We may see temporary network unreachable warnings since we are bringing the network interface down # # 2024-02-16T09:27:14.114569Z WARNING MonitorHandler ExtHandler Error in SendHostPluginHeartbeat: [HttpError] [HTTP Failed] GET http://168.63.129.16:32526/health -- IOError [Errno 101] Network is unreachable -- 1 attempts made --- [NOTE: Will not log the same error for the next hour] # 2024-02-28T05:37:55.562065Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] GET vmSettings [correlation ID: 28de1093-ecb5-4515-ba8e-2ed0c7778e34 eTag: 4648629460326038775]: Request failed: [Errno 101] Network is unreachable # 2024-02-29T09:30:40.702293Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] [Wireserver Exception] [HttpError] [HTTP Failed] GET http://168.63.129.16/machine/ -- IOError [Errno 101] Network is unreachable -- 6 attempts made # { 'message': r"GET (http://168.63.129.16:32526/health|vmSettings|http://168.63.129.16/machine).*\[Errno 101\] Network is unreachable", } ] return ignore_rules if __name__ == "__main__": PublishHostname.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/recover_network_interface/000077500000000000000000000000001510742556200256675ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/recover_network_interface/recover_network_interface.py000066400000000000000000000227161510742556200335070ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
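PublishHostname parses the reverse-DNS lookup output with a named capture group to extract the published hostname. A sketch of that parse against sample dig output (the sample addresses are illustrative):

```python
import re

# Named group 'hostname' extracts the published name from the PTR record
DNS_REGEX = (r"[\S\s]*;; ANSWER SECTION:\s.*PTR\s*(?P<hostname>.*)\.internal\."
             r"(cloudapp\.net|chinacloudapp\.cn|usgovcloudapp\.net).*[\S\s]*")

dig_output = """; <<>> DiG 9.16 <<>> -x 10.0.0.4
;; ANSWER SECTION:
4.0.0.10.in-addr.arpa. 10 IN PTR hostname-monitor-0.internal.cloudapp.net.
"""
m = re.match(DNS_REGEX, dig_output)
print(m.group("hostname"))  # hostname-monitor-0
```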
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test uses CSE to bring the network down and call check_and_recover_nic_state to bring the network back into an # 'up' and 'connected' state. The intention of the test is to alert us if there is some change in newer distros which # affects this logic. # import json from typing import List, Dict, Any from assertpy import fail, assert_that from time import sleep from tests_e2e.tests.lib.agent_test import AgentVmTest, TestSkipped from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds class RecoverNetworkInterface(AgentVmTest): def __init__(self, context: AgentVmTestContext): super().__init__(context) self._context = context self._ssh_client = context.create_ssh_client() self._private_ip = context.vm.get_private_ip_address() self._vm_password = "" def add_vm_password(self): # Add password to VM to help with debugging in case of failure # REMOVE PWD FROM LOGS IF WE EVER MAKE THESE RUNS/LOGS PUBLIC username = self._ssh_client.username pwd = self._ssh_client.run_command("openssl rand -base64 32 | tr : .").rstrip() self._vm_password = pwd log.info("VM Username: {0}; VM Password: {1}".format(username, pwd)) self._ssh_client.run_command("echo '{0}:{1}' | sudo -S chpasswd".format(username, pwd)) def check_agent_reports_status(self): status_updated = False last_agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time 
log.info("Agent reported status at {0}".format(last_agent_status_time)) retries = 3 while retries > 0 and not status_updated: agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time if agent_status_time != last_agent_status_time: status_updated = True log.info("Agent reported status at {0}".format(last_agent_status_time)) else: retries -= 1 sleep(60) if not status_updated: fail("Agent hasn't reported status since {0} and ssh connection failed. Use the serial console in portal " "to debug".format(last_agent_status_time)) def run(self): # Add password to VM and log. This allows us to debug with serial console if necessary log.info("") log.info("Adding password to the VM to use for debugging in case necessary...") self.add_vm_password() # Skip the test if NM_CONTROLLED=n. The current recover logic does not work in this case result = self._ssh_client.run_command("recover_network_interface-get_nm_controlled.py", use_sudo=True) if "Interface is NOT NM controlled" in result: raise TestSkipped("Current recover method will not work on interfaces where NM_Controlled=n") # Get the primary network interface name ifname = self._ssh_client.run_command("pypy3 -c 'from azurelinuxagent.common.osutil.redhat import RedhatOSUtil; print(RedhatOSUtil().get_if_name())'").rstrip() # The interface name needs to be in double quotes for the pypy portion of the script formatted_ifname = f'"{ifname}"' # The script should bring the primary network interface down and use the agent to recover the interface. These # commands will bring the network down, so they should be executed on the machine using CSE instead of ssh. script = f""" set -uxo pipefail # The 'ifdown' network script is used to bring the network interface down. For some distros, this script # executes nmcli commands which can timeout and return non-zero exit codes. Allow 3 retries in case 'ifdown' # returns non-zero exit code. 
This is the same number of retries the agent allows in DefaultOSUtil.restart_if retries=3; ifdown_success=false while [ $retries -gt 0 ] do echo Attempting to bring network interface down with ifdown... ifdown {ifname}; exit_code=$? if [ $exit_code -eq 0 ]; then echo ifdown succeeded ifdown_success=true break fi echo ifdown failed with exit code $exit_code, try again after 5 seconds... sleep 5 ((retries=retries-1)) done # Verify the agent network interface recovery logic only if 'ifdown' succeeded if ! $ifdown_success ; then # Fail the script if 'ifdown' command didn't succeed exit 1 else # Log the network interface state before attempting to recover the interface nic_state=$(nmcli -g general.state device show {ifname}) echo Primary network interface state before recovering: $nic_state # Use the agent OSUtil to bring the network interface back up source /home/{self._context.username}/bin/set-agent-env; echo Attempting to recover the network interface with the agent... pypy3 -c 'from azurelinuxagent.common.osutil.redhat import RedhatOSUtil; RedhatOSUtil().check_and_recover_nic_state({formatted_ifname})'; # Log the network interface state after attempting to recover the interface nic_state=$(nmcli -g general.state device show {ifname}); echo Primary network interface state after recovering: $nic_state fi """ log.info("") log.info("Using CSE to bring the primary network interface down and call the OSUtil to bring the interface back up. Command to execute: {0}".format(script)) custom_script = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript") try: custom_script.enable(protected_settings={'commandToExecute': script}, settings={}) except TimeoutError: # Custom script may timeout if attempt to recover the network interface was not successful. The agent won't # be able to report status for the extension if network is down. Reboot the VM to bring the network back up # so logs can be collected. 
log.info("Custom script did not complete within the timeout. Rebooting the VM in an attempt to bring the network interface back up...") self._context.vm.restart(wait_for_boot=True, ssh_client=self._ssh_client) fail("Custom script did not complete within the timeout, which indicates the agent may be unable to report status due to network issues.") # Check that the interface was down and brought back up in instance view log.info("") log.info("Checking the instance view to confirm the primary network interface was brought down and successfully recovered by the agent...") instance_view = custom_script.get_instance_view() log.info("Instance view for custom script after enable is: {0}".format(json.dumps(instance_view.serialize(), indent=4))) assert_that(len(instance_view.statuses)).described_as("Instance view should have a status for CustomScript").is_greater_than(0) assert_that(instance_view.statuses[0].message).described_as("The primary network interface should be in a disconnected state before the attempt to recover").contains("Primary network interface state before recovering: 30 (disconnected)") assert_that(instance_view.statuses[0].message).described_as("The primary network interface should be in a connected state after the attempt to recover").contains("Primary network interface state after recovering: 100 (connected)") # Check that the agent is successfully reporting status after recovering the network log.info("") log.info("Checking that the agent is reporting status after recovering the network...") self.check_agent_reports_status() log.info("") log.info("The primary network interface was successfully recovered by the agent.") def get_ignore_error_rules(self) -> List[Dict[str, Any]]: ignore_rules = [ # # We may see temporary network unreachable warnings since we are bringing the network interface down # 2024-02-01T23:40:03.563499Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] GET vmSettings [correlation ID: 
ac21bdd7-1a7a-4bba-b307-b9d5bc30da33 eTag: 941323814975149980]: Request failed: [Errno 101] Network is unreachable # { 'message': r"Error fetching the goal state: \[ProtocolError\] GET vmSettings.*Request failed: \[Errno 101\] Network is unreachable" } ] return ignore_rules if __name__ == "__main__": RecoverNetworkInterface.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/000077500000000000000000000000001510742556200220755ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/error_remote_test.py000077500000000000000000000017231510742556200262200ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class ErrorRemoteTest(AgentVmTest): """ A trivial remote test that fails """ def run(self): self._run_remote_test(self._context.create_ssh_client(), "samples-error_remote_test.py") if __name__ == "__main__": ErrorRemoteTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/error_test.py000077500000000000000000000016561510742556200246520ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class ErrorTest(AgentVmTest): """ A trivial test that errors out """ def run(self): raise Exception("* TEST ERROR *") # simulate an unexpected error if __name__ == "__main__": ErrorTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/fail_remote_test.py000077500000000000000000000017201510742556200257770ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# from tests_e2e.tests.lib.agent_test import AgentVmTest class FailRemoteTest(AgentVmTest): """ A trivial remote test that fails """ def run(self): self._run_remote_test(self._context.create_ssh_client(), "samples-fail_remote_test.py") if __name__ == "__main__": FailRemoteTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/fail_test.py000077500000000000000000000016271510742556200244320ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from assertpy import fail from tests_e2e.tests.lib.agent_test import AgentVmTest class FailTest(AgentVmTest): """ A trivial test that fails """ def run(self): fail("* TEST FAILED *") if __name__ == "__main__": FailTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/pass_remote_test.py000077500000000000000000000017231510742556200260350ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class PassRemoteTest(AgentVmTest): """ A trivial remote test that succeeds """ def run(self): self._run_remote_test(self._context.create_ssh_client(), "samples-pass_remote_test.py") if __name__ == "__main__": PassRemoteTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/pass_test.py000077500000000000000000000016521510742556200244630ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log class PassTest(AgentVmTest): """ A trivial test that passes. 
""" def run(self): log.info("* PASSED *") if __name__ == "__main__": PassTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/sleep_test.py000077500000000000000000000020321510742556200246160ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import time from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log class SleepTest(AgentVmTest): """ A trivial test that sleeps for 2 hours, then passes. """ def run(self): log.info("Sleeping for 2 hours") time.sleep(2 * 60 * 60) log.info("* PASSED *") if __name__ == "__main__": SleepTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/samples/vmss_test.py000077500000000000000000000024551510742556200245070ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmssTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient class VmssTest(AgentVmssTest): """ Sample test for scale sets """ def run(self): for address in self._context.vmss.get_instances_ip_address(): ssh_client: SshClient = SshClient(ip_address=address.ip_address, username=self._context.username, identity_file=self._context.identity_file) log.info("%s: Hostname: %s", address.instance_name, ssh_client.run_command("hostname").strip()) log.info("* PASSED *") if __name__ == "__main__": VmssTest.run_from_command_line() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/000077500000000000000000000000001510742556200221205ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_cgroups-check_cgroups_agent.py000077500000000000000000000140471510742556200313360ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# import os import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.cgroup_helpers import BASE_CGROUP, get_agent_cgroup_mount_path, \ AGENT_SERVICE_NAME, verify_if_distro_supports_cgroup, print_cgroups, \ verify_agent_cgroup_assigned_correctly, get_mounted_controller_list, CGROUP_TRACKED_PATTERN from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def verify_if_cgroup_controllers_are_mounted(): """ Checks if controllers CPU, Memory that agent use are mounted in the system """ log.info("===== Verifying cgroup controllers that agent use are mounted in the system") all_controllers_present = os.path.exists(BASE_CGROUP) missing_controllers = [] mounted_controllers = [] for controller in get_mounted_controller_list(): controller_path = os.path.join(BASE_CGROUP, controller) if not os.path.exists(controller_path): all_controllers_present = False missing_controllers.append(controller_path) else: mounted_controllers.append(controller_path) if not all_controllers_present: fail('Not all of the controllers {0} mounted in expected cgroups. 
Mounted controllers are: {1}.\n '
             'Missing controllers are: {2} \n System mounted cgroups are:\n{3}'.format(get_mounted_controller_list(), mounted_controllers, missing_controllers, print_cgroups()))
    log.info('Verified all cgroup controllers are present.\n {0}'.format(mounted_controllers))


def verify_agent_cgroup_created_on_file_system():
    """
    Checks that the agent service is running in the azure.slice/{agent_service} cgroup and mounted under the same system cgroup controller paths
    """
    log.info("===== Verifying the agent cgroup paths exist on file system")
    agent_cgroup_mount_path = get_agent_cgroup_mount_path()
    log.info("expected agent cgroup mount path: %s", agent_cgroup_mount_path)
    missing_agent_cgroup_controllers_path = []
    verified_agent_cgroup_controllers_path = []

    def is_agent_cgroup_controllers_path_exist():
        all_controllers_path_exist = True
        missing_agent_cgroup_controllers_path.clear()
        verified_agent_cgroup_controllers_path.clear()
        for agent_cgroup in agent_cgroup_mount_path:
            all_controllers_path_exist = True
            for controller in get_mounted_controller_list():
                agent_controller_path = os.path.join(BASE_CGROUP, controller, agent_cgroup[1:])
                if not os.path.exists(agent_controller_path):
                    all_controllers_path_exist = False
                    missing_agent_cgroup_controllers_path.append(agent_controller_path)
                else:
                    verified_agent_cgroup_controllers_path.append(agent_controller_path)
            # check the base cgroup path in v2
            if not get_mounted_controller_list():
                agent_controller_path = os.path.join(BASE_CGROUP, agent_cgroup[1:])
                if not os.path.exists(agent_controller_path):
                    all_controllers_path_exist = False
                    missing_agent_cgroup_controllers_path.append(agent_controller_path)
                else:
                    all_controllers_path_exist = True
                    verified_agent_cgroup_controllers_path.append(agent_controller_path)
            if all_controllers_path_exist:
                break
        return all_controllers_path_exist

    # Test check can happen before the agent sets up its cgroup configuration.
    # So, retrying the check a few times
    if not retry_if_false(is_agent_cgroup_controllers_path_exist):
        fail("Agent's cgroup paths couldn't be found on the file system. Missing agent cgroup paths: {0}.\n Verified agent cgroup paths: {1}".format(missing_agent_cgroup_controllers_path, verified_agent_cgroup_controllers_path))
    log.info('Verified all agent cgroup paths are present.\n {0}'.format(verified_agent_cgroup_controllers_path))


def verify_agent_cgroups_tracked():
    """
    Checks that the agent is tracking its cgroup paths for polling resource usage. This is verified by checking the agent log for the message "Started tracking cgroup"
    """
    log.info("===== Verifying agent started tracking cgroups from the log")
    tracked_cgroups = []

    def is_agent_tracking_cgroup():
        tracked_cgroups.clear()
        for record in AgentLog().read():
            match = re.search(CGROUP_TRACKED_PATTERN, record.message)
            if match is not None:
                tracked_cgroups.append(match.group('path'))
        for controller in get_mounted_controller_list():
            if not any(AGENT_SERVICE_NAME in cgroup_path and controller in cgroup_path for cgroup_path in tracked_cgroups):
                return False
        return True

    # Test check can happen before the agent starts tracking cgroups. So, retrying the check a few times
    found = retry_if_false(is_agent_tracking_cgroup)
    if not found:
        fail('Agent {0} is not being tracked.
Tracked cgroups:{1}'.format(get_mounted_controller_list(), tracked_cgroups)) log.info("Agent is tracking cgroups correctly.\n%s", tracked_cgroups) def main(): verify_if_distro_supports_cgroup() verify_if_cgroup_controllers_are_mounted() verify_agent_cgroup_created_on_file_system() verify_agent_cgroup_assigned_correctly() verify_agent_cgroups_tracked() run_remote_test(main) agent_cgroups_process_check-cgroups_not_enabled.py000077500000000000000000000043551510742556200341720ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # This script verifies agent detected unexpected processes in the agent cgroup before cgroup initialization from assertpy import fail from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.cgroup_helpers import check_agent_quota_disabled, check_log_message, get_agent_cpu_quota from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false def restart_ext_handler(): log.info("Restarting the extension handler") shellutil.run_command(["pkill", "-f", "WALinuxAgent.*run-exthandler"]) def verify_agent_cgroups_not_enabled(): """ Verifies that the agent cgroups not enabled when ama extension(unexpected) processes are found in the agent cgroup """ log.info("Verifying agent cgroups are not enabled") ama_process_found: bool = retry_if_false(lambda: check_log_message("The agent's cgroup includes unexpected processes:.+/var/lib/waagent/Microsoft.Azure.Monitor")) if not ama_process_found: fail("Agent failed to found ama extension processes in the agent cgroup") found: bool = retry_if_false(lambda: check_log_message("Found unexpected processes in the agent cgroup before agent enable cgroups")) if not found: fail("Agent failed to found unknown processes in the agent cgroup") disabled: bool = retry_if_false(check_agent_quota_disabled) if not disabled: fail("The agent failed to disable its CPUQuota when cgroups were not enabled. Current CPUQuota: {0}".format(get_agent_cpu_quota())) def main(): restart_ext_handler() verify_agent_cgroups_not_enabled() if __name__ == "__main__": main() agent_cgroups_process_check-unknown_process_check.py000077500000000000000000000076571510742556200345600ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script forces the process check by putting unknown process in the agent's cgroup import subprocess import datetime from assertpy import fail from azurelinuxagent.common.future import UTC from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.cgroup_helpers import check_agent_quota_disabled, check_log_message, get_unit_cgroup_proc_path, AGENT_SERVICE_NAME from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false def prepare_agent(): check_time = datetime.datetime.now(UTC) log.info("Executing script update-waagent-conf to enable agent cgroups config flag") result = shellutil.run_command(["update-waagent-conf", "Debug.CgroupCheckPeriod=20", "Debug.CgroupLogMetrics=y", "Debug.CgroupDisableOnProcessCheckFailure=y", "Debug.CgroupDisableOnQuotaCheckFailure=n"]) log.info("Successfully enabled agent cgroups config flag: {0}".format(result)) found: bool = retry_if_false(lambda: check_log_message(" Agent cgroups enabled: True", after_timestamp=check_time)) if not found: fail("Agent cgroups not enabled") def creating_dummy_process(): log.info("Creating dummy process to add to agent's cgroup") dd_command = ["sleep", "60m"] proc = subprocess.Popen(dd_command) return proc.pid def remove_dummy_process(pid): log.info("Removing dummy process from agent's cgroup") shellutil.run_command(["kill", "-9", str(pid)]) def disable_agent_cgroups_with_unknown_process(pid): """ Adding dummy process to the agent's cgroup and verifying that the agent detects the unknown process and disables cgroups Note: System may kick the added 
process out of the cgroups, keeps adding until agent detect that process """ def unknown_process_found(): cgroup_procs_path = get_unit_cgroup_proc_path(AGENT_SERVICE_NAME, 'cpu,cpuacct') log.info("Adding dummy process %s to cgroup.procs file %s", pid, cgroup_procs_path) try: with open(cgroup_procs_path, 'a') as f: f.write("\n") f.write(str(pid)) except Exception as e: log.warning("Error while adding process to cgroup.procs file: {0}".format(e)) return False # The log message indicating the check failed is similar to # 2021-03-29T23:33:15.603530Z INFO MonitorHandler ExtHandler Disabling resource usage monitoring. Reason: Check on cgroups failed: # [CGroupsException] The agent's cgroup includes unexpected processes: ['[PID: 25826] python3\x00/home/nam/Compute-Runtime-Tux-Pipeline/dungeon_crawler/s'] found: bool = retry_if_false(lambda: check_log_message( "Disabling resource usage monitoring. Reason: Check on cgroups failed:.+The agent's cgroup includes unexpected processes:.+{0}".format( pid)), attempts=3) return found and retry_if_false(check_agent_quota_disabled, attempts=3) found: bool = retry_if_false(unknown_process_found, attempts=3) if not found: fail("The agent did not detect unknown process: {0}".format(pid)) def main(): prepare_agent() pid = creating_dummy_process() disable_agent_cgroups_with_unknown_process(pid) remove_dummy_process(pid) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_cpu_quota-check_agent_cpu_quota.py000077500000000000000000000216751510742556200321770ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import datetime import os import re import time from assertpy import fail from azurelinuxagent.common.future import UTC from azurelinuxagent.common.osutil import systemd from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.cgroup_helpers import check_agent_quota_disabled, \ get_agent_cpu_quota, check_log_message from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def prepare_agent(): # This function prepares the agent: # 1) It modifies the service unit file to wrap the agent process with a script that starts the actual agent and then # launches an instance of the dummy process to consume the CPU. Since all these processes are in the same cgroup, # this has the same effect as the agent itself consuming the CPU. 
    #
    # The process tree is similar to
    #
    #   /usr/bin/python3 /home/azureuser/bin/agent_cpu_quota-start_service.py /usr/bin/python3 -u /usr/sbin/waagent -daemon
    #   ├─/usr/bin/python3 -u /usr/sbin/waagent -daemon
    #   │ └─python3 -u bin/WALinuxAgent-9.9.9.9-py3.8.egg -run-exthandlers
    #   │   └─4*[{python3}]
    #   ├─dd if=/dev/zero of=/dev/null
    #   │
    #   └─{python3}
    #
    # And the agent's cgroup looks like
    #
    #   CGroup: /azure.slice/walinuxagent.service
    #   ├─10507 /usr/bin/python3 /home/azureuser/bin/agent_cpu_quota-start_service.py /usr/bin/python3 -u /usr/sbin/waagent -daemon
    #   ├─10508 /usr/bin/python3 -u /usr/sbin/waagent -daemon
    #   ├─10516 python3 -u bin/WALinuxAgent-9.9.9.9-py3.8.egg -run-exthandlers
    #   ├─10711 dd if=/dev/zero of=/dev/null
    #
    # 2) It turns on a few debug flags and restarts the agent
    log.info("***Preparing agent for testing cpu quota")
    #
    # Create a drop-in file to wrap "start-service.py" around the actual agent: This will override the ExecStart line in the agent's unit file
    #
    #    ExecStart= (needs to be empty to clear the original ExecStart)
    #    ExecStart=/home/.../agent_cgroups-start-service.py /usr/bin/python3 -u /usr/sbin/waagent -daemon
    #
    service_file = systemd.get_agent_unit_file()
    exec_start = None
    with open(service_file, "r") as file_:
        for line in file_:
            match = re.match("ExecStart=(.+)", line)
            if match is not None:
                exec_start = match.group(1)
                break
        else:
            file_.seek(0)
            raise Exception("Could not find ExecStart in {0}\n:{1}".format(service_file, file_.read()))
    agent_python = exec_start.split()[0]
    current_directory = os.path.dirname(os.path.abspath(__file__))
    start_service_script = os.path.join(current_directory, "agent_cpu_quota-start_service.py")
    os.makedirs(systemd.get_agent_drop_in_path(), exist_ok=True)
    drop_in_file = os.path.join(systemd.get_agent_drop_in_path(), "99-ExecStart.conf")
    log.info("Creating %s...", drop_in_file)
    with open(drop_in_file, "w") as file_:
        file_.write("""
[Service]
ExecStart=
ExecStart={0} {1} {2}
""".format(agent_python, start_service_script,
exec_start)) log.info("Executing daemon-reload") shellutil.run_command(["systemctl", "daemon-reload"]) # Disable all checks on cgroups and enable log metrics every 20 sec log.info("Executing script update-waagent-conf to enable agent cgroups config flag") result = shellutil.run_command(["update-waagent-conf", "Debug.CgroupCheckPeriod=20", "Debug.CgroupLogMetrics=y", "Debug.CgroupDisableOnProcessCheckFailure=n", "Debug.CgroupDisableOnQuotaCheckFailure=n"]) log.info("Successfully enabled agent cgroups config flag: {0}".format(result)) def verify_agent_reported_metrics(): """ This method verifies that the agent reports % Processor Time and Throttled Time metrics """ log.info("** Verifying agent reported metrics") log.info("Parsing agent log for metrics") processor_time = [] throttled_time = [] def check_agent_log_for_metrics() -> bool: for record in AgentLog().read(): match = re.search(r"% Processor Time\s*\[walinuxagent.service\]\s*=\s*([0-9.]+)", record.message) if match is not None: processor_time.append(float(match.group(1))) else: match = re.search(r"Throttled Time \(s\)\s*\[walinuxagent.service\]\s*=\s*([0-9.]+)", record.message) if match is not None: throttled_time.append(float(match.group(1))) if len(processor_time) < 1 or len(throttled_time) < 1: return False return True found: bool = retry_if_false(check_agent_log_for_metrics) if found: log.info("%% Processor Time: %s", processor_time) log.info("Throttled Time: %s", throttled_time) log.info("Successfully verified agent reported resource metrics") else: fail( "The agent doesn't seem to be collecting % Processor Time and Throttled Time metrics. 
Agent found Processor Time: {0}, Throttled Time: {1}".format( processor_time, throttled_time)) def wait_for_log_message(message, timeout=datetime.timedelta(minutes=5)): log.info("Checking agent's log for message matching [%s]", message) start_time = datetime.datetime.now(UTC) while datetime.datetime.now(UTC) - start_time <= timeout: for record in AgentLog().read(): match = re.search(message, record.message, flags=re.DOTALL) if match is not None: log.info("Found message:\n\t%s", record.text.replace("\n", "\n\t")) return time.sleep(30) fail("The agent did not find [{0}] in its log within the allowed timeout".format(message)) def verify_throttling_time_check_on_agent_cgroups(): """ This method checks agent disables its CPUQuota when it exceeds its throttling limit """ log.info("***Verifying CPU throttling check on agent cgroups") # Now disable the check on unexpected processes and enable the check on throttledtime and verify that the agent disables its CPUQuota when it exceeds its throttling limit if check_agent_quota_disabled(): fail("The agent's CPUQuota is not enabled: {0}".format(get_agent_cpu_quota())) shellutil.run_command(["update-waagent-conf", "Debug.CgroupDisableOnProcessCheckFailure=n", "Debug.CgroupDisableOnQuotaCheckFailure=y", "Debug.AgentCpuThrottledTimeThreshold=5"]) # The log message indicating the check failed is similar to # 2021-04-01T20:47:55.892569Z INFO MonitorHandler ExtHandler Disabling resource usage monitoring. Reason: Check on cgroups failed: # [CGroupsException] The agent has been throttled for 121.339916938 seconds # # After we need to wait for a little longer for the agent to update systemd: # 2021-04-14T01:51:44.399860Z INFO MonitorHandler ExtHandler Executing systemctl daemon-reload... # wait_for_log_message( "Disabling resource usage monitoring. 
Reason: Check on cgroups failed:.+The agent has been throttled", timeout=datetime.timedelta(minutes=10)) wait_for_log_message("Stopped tracking cpu cgroup walinuxagent.service", timeout=datetime.timedelta(minutes=10)) disabled: bool = retry_if_false(check_agent_quota_disabled) if not disabled: fail("The agent did not disable its CPUQuota: {0}".format(get_agent_cpu_quota())) def cleanup_test_setup(): log.info("Cleaning up test setup") drop_in_file = os.path.join(systemd.get_agent_drop_in_path(), "99-ExecStart.conf") if os.path.exists(drop_in_file): log.info("Removing %s...", drop_in_file) os.remove(drop_in_file) shellutil.run_command(["systemctl", "daemon-reload"]) check_time = datetime.datetime.now(UTC) shellutil.run_command(["agent-service", "restart"]) found: bool = retry_if_false(lambda: check_log_message(" Agent cgroups enabled: True", after_timestamp=check_time)) if not found: fail("Agent cgroups not enabled yet") def main(): prepare_agent() verify_agent_reported_metrics() verify_throttling_time_check_on_agent_cgroups() cleanup_test_setup() run_remote_test(main) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_cpu_quota-start_service.py000077500000000000000000000057071510742556200305370ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # This script starts the actual agent and then launches an instance of the dummy process periodically to consume the CPU # import signal import subprocess import sys import threading import time import traceback from azurelinuxagent.common import logger class CpuConsumer(threading.Thread): def __init__(self): threading.Thread.__init__(self) self._stopped = False def run(self): threading.current_thread().name = "*Stress*" while not self._stopped: try: # Dummy operation(reads empty streams and drops) which creates load on the CPU dd_command = ["dd", "if=/dev/zero", "of=/dev/null"] logger.info("Starting dummy dd command: {0} to stress CPU", ' '.join(dd_command)) subprocess.Popen(dd_command) logger.info("dd command completed; sleeping...") i = 0 while i < 30 and not self._stopped: time.sleep(1) i += 1 except Exception as run_exception: logger.error("{0}:\n{1}", run_exception, traceback.format_exc()) def stop(self): self._stopped = True try: threading.current_thread().name = "*StartService*" logger.set_prefix("E2ETest") logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, "/var/log/waagent.log") agent_command_line = sys.argv[1:] logger.info("Starting Agent: {0}", ' '.join(agent_command_line)) agent_process = subprocess.Popen(agent_command_line) # sleep a little to give the agent a chance to initialize time.sleep(15) cpu_consumer = CpuConsumer() cpu_consumer.start() def forward_signal(signum, _): if signum == signal.SIGTERM: logger.info("Stopping stress thread...") cpu_consumer.stop() logger.info("Forwarding signal {0} to Agent", signum) agent_process.send_signal(signum) signal.signal(signal.SIGTERM, forward_signal) agent_process.wait() logger.info("Agent completed") cpu_consumer.stop() cpu_consumer.join() logger.info("Stress completed") logger.info("Exiting...") sys.exit(agent_process.returncode) except Exception as exception: logger.error("Unexpected error occurred while starting agent service : {0}", exception) raise 
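# The wrapper script above relies on one core pattern: run the real agent as a
# child process and relay SIGTERM to it so systemd's stop request reaches the
# actual service. A minimal standalone sketch of that signal-forwarding pattern
# follows; the function name is illustrative (not part of the test suite) and it
# assumes a POSIX system with a `sleep` binary available.

```python
import signal
import subprocess


def run_with_signal_forwarding(command):
    # Launch the wrapped process (in the test above, the agent daemon).
    child = subprocess.Popen(command)

    def forward(signum, _frame):
        # Relay the termination request to the wrapped process; the wrapper
        # itself keeps running until the child exits.
        child.send_signal(signum)

    signal.signal(signal.SIGTERM, forward)
    child.wait()
    # A negative return code means the child was terminated by that signal.
    return child.returncode
```

# When the child is terminated by the forwarded signal, Popen.returncode is the
# negative signal number (e.g. -15 for SIGTERM), which is why the wrapper can
# propagate the child's exit status back to systemd.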
Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_ext_policy-verify_operation_disallowed.py000077500000000000000000000045151510742556200336300ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Checks that the input data is found in the agent log # import argparse import sys import re from datetime import datetime from azurelinuxagent.common.future import UTC, datetime_min_utc from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.agent_log import AgentLog def main(): parser = argparse.ArgumentParser() parser.add_argument('--extension-name', dest='extension_name', required=True) parser.add_argument('--operation', dest='operation', required=True, choices=['run', 'uninstall']) parser.add_argument("--after-timestamp", dest='after_timestamp', required=False) args, _ = parser.parse_known_args() log.info("Verifying that agent log shows {0} failure due to policy".format(args.operation)) pattern = (r".*Extension will not be processed: failed to {0} extension '{1}' because it is not specified as an allowed extension.*" .format(args.operation, re.escape(args.extension_name))) agent_log = AgentLog() if args.after_timestamp is None: after_datetime = datetime_min_utc else: after_datetime = datetime.strptime(args.after_timestamp, '%Y-%m-%d %H:%M:%S').replace(tzinfo=UTC) try: for record in agent_log.read(): if record.timestamp > after_datetime: if 
re.search(pattern, record.message): log.info("Found expected error in agent log: {0}".format(record.message)) sys.exit(0) except Exception as e: log.info("Error thrown when searching for test data in agent log: {0}".format(str(e))) log.info("Did not find expected error in agent log. Expected to find pattern: {0}".format(pattern)) sys.exit(1) if __name__ == "__main__": main()Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_ext_policy-verify_operation_success.py000077500000000000000000000113361510742556200331500ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # import argparse from assertpy import fail from datetime import datetime import time import re from azurelinuxagent.common.future import UTC, datetime_min_utc from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.agent_log import AgentLog # This script verifies the success of an operation using the agent log. # Enable: check that the agent has reported a successful status for the specified list of extensions # Uninstall: check that the agent has not reported any status for the specified list of extensions # # Usage: # agent_ext_policy-verify_operation_success.py --extension-list "A" "B" --operation "enable" --after-timestamp "2025-01-13 11:21:40" def __get_last_reported_status(after_timestamp): # Get last reported status from the agent log file. 
If after_timestamp is specified, return only the status reported # after that timestamp; return None if it is not found after all retries. agent_log = AgentLog() retries = 10 for attempt in range(retries): phrase = "All extensions in the goal state have reached a terminal state" latest_status = None for record in agent_log.read(): if record.timestamp < after_timestamp: continue if phrase in record.message: if latest_status is None: latest_status = record else: if latest_status.timestamp < record.timestamp: latest_status = record if latest_status is not None: log.info("Latest status: {0}".format(latest_status.message)) return latest_status log.info("Unable to find handler status in agent log on attempt {0}. Retrying...".format(attempt + 1)) time.sleep(30) return None def check_extension_reported_successful_status(status_message, ext_name: str): # Extract extension statuses from the agent record pattern = r"\(u?'(" + re.escape(ext_name) + r")', u?'([^']+)'\)" match = re.search(pattern, status_message) if match is None: fail("Agent did not report any status for extension {0}, enable failed.".format(ext_name)) else: status_code = match.group(2).lower() log.info("Status code: {0}".format(status_code)) if status_code not in ["success", "ready"]: fail("Agent did not report a successful status for extension {0}, enable failed. Status: {1}".format(ext_name, status_code)) else: log.info("Agent reported a successful status for extension {0}, enable succeeded.".format(ext_name)) def main(): parser = argparse.ArgumentParser() parser.add_argument('--extension-list', dest='extension_list', required=True, nargs='+', help='Extension name(s) to process.
Provide a single name or a space-separated list of names.') parser.add_argument('--operation', dest='operation', required=True, choices=['enable', 'uninstall']) parser.add_argument("--after-timestamp", dest='after_timestamp', required=False) args = parser.parse_args() if args.after_timestamp is not None: after_datetime = datetime.strptime(args.after_timestamp, '%Y-%m-%d %H:%M:%S').replace(tzinfo=UTC) else: after_datetime = datetime_min_utc status = __get_last_reported_status(after_datetime) if status is None: fail("Unable to find extension status in agent log.") if args.operation == "enable": log.info("Checking agent status file to verify that extensions were enabled successfully.") for extension in args.extension_list: check_extension_reported_successful_status(status.message, extension) elif args.operation == "uninstall": log.info("Checking agent log to verify that status is not reported for uninstalled extensions.") for extension in args.extension_list: if extension in status.message: fail("Agent reported status for extension {0}, uninstall failed.".format(extension)) else: log.info("Agent did not report status for extension {0}, uninstall succeeded.".format(extension)) if __name__ == "__main__": main()Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_ext_workflow-assert_operation_sequence.py000077500000000000000000000175511510742556200336650ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # The DcrTestExtension maintains an `operations-<version>.log` for every operation that the agent executes on that # extension. This script asserts that the operations sequence in the log file matches the expected operations given as # input to this script. We do this to confirm that the agent executed the correct sequence of operations. # # Sample operations-<version>.log file snippet - # Date:2019-07-30T21:54:03Z; Operation:install; SeqNo:0 # Date:2019-07-30T21:54:05Z; Operation:enable; SeqNo:0 # Date:2019-07-30T21:54:37Z; Operation:enable; SeqNo:1 # Date:2019-07-30T21:55:20Z; Operation:disable; SeqNo:1 # Date:2019-07-30T21:55:22Z; Operation:uninstall; SeqNo:1 # import argparse import os import sys import time from datetime import datetime from typing import Any, Dict, List from azurelinuxagent.common.future import UTC DELIMITER = ";" OPS_FILE_DIR = "/var/log/azure/Microsoft.Azure.TestExtensions.Edp.GuestAgentDcrTest/" OPS_FILE_PATTERN = ["operations-%s.log", "%s/operations-%s.log"] MAX_RETRY = 5 SLEEP_TIMER = 30 def parse_ops_log(ops_version: str, input_ops: List[str], start_time: datetime): # input_ops are the expected operations that we expect to see in the operations log file ver = (ops_version,) ops_file_name = None for file_pat in OPS_FILE_PATTERN: ops_file_name = os.path.join(OPS_FILE_DIR, file_pat % ver) if not os.path.exists(ops_file_name): ver = ver + (ops_version,) ops_file_name = None continue break if not ops_file_name: raise IOError("Operations File %s not found" % os.path.join(OPS_FILE_DIR, OPS_FILE_PATTERN[0] % ops_version)) ops = [] with open(ops_file_name, 'r') as ops_log: # Example of a line in the log file - `Date:2019-07-30T21:54:03Z; Operation:install; SeqNo:0` content = ops_log.readlines() for op_log in content: data = op_log.split(DELIMITER) date = datetime.strptime(data[0].split("Date:")[1], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=UTC) op =
data[1].split("Operation:")[1] seq_no = data[2].split("SeqNo:")[1].strip('\n') # We only capture operations that occurred after the start_time of the test if start_time > date: continue ops.append({'date': date, 'op': op, 'seq_no': seq_no}) # Only parse the expected number of operations after the test case starts. There may be additional # operations on the extension if the agent processes a goal state with additional extensions added by policy # or otherwise (ConfigurationforLinux, for example) if len(ops) == len(input_ops): break return ops def assert_ops_in_sequence(actual_ops: List[Dict[str, Any]], expected_ops: List[str]): exit_code = 0 if len(actual_ops) != len(expected_ops): print("Operation sequence length doesn't match, exit code 2") exit_code = 2 last_date = datetime(1970, 1, 1, tzinfo=UTC) for idx, val in enumerate(actual_ops): if exit_code != 0: break if val['date'] < last_date or val['op'] != expected_ops[idx]: print("Operation sequence doesn't match, exit code 2") exit_code = 2 last_date = val['date'] return exit_code def check_update_sequence(args): # old_ops_file_name = OPS_FILE_PATTERN % args.old_version # new_ops_file_name = OPS_FILE_PATTERN % args.new_version actual_ops = parse_ops_log(args.old_version, args.old_ops, args.start_time) actual_ops.extend(parse_ops_log(args.new_version, args.new_ops, args.start_time)) actual_ops = sorted(actual_ops, key=lambda op: op['date']) exit_code = assert_ops_in_sequence(actual_ops, args.ops) return exit_code, actual_ops def check_operation_sequence(args): # ops_file_name = OPS_FILE_PATTERN % args.version actual_ops = parse_ops_log(args.version, args.ops, args.start_time) exit_code = assert_ops_in_sequence(actual_ops, args.ops) return exit_code, actual_ops def main(): # There are 2 main ways you can call this file - normal_ops_sequence or update_sequence parser = argparse.ArgumentParser() cmd_parsers = parser.add_subparsers(help="sub-command help", dest="command") # We use start_time to make sure we're testing the correct
test and not some other test parser.add_argument("--start-time", dest='start_time', required=True) # Normal_ops_sequence gets the version of the ext and parses the corresponding operations file to get the operation # sequence that was run on the extension normal_ops_sequence_parser = cmd_parsers.add_parser("normal_ops_sequence", help="Test the normal operation sequence") normal_ops_sequence_parser.add_argument('--version', dest='version') normal_ops_sequence_parser.add_argument('--ops', nargs='*', dest='ops', default=argparse.SUPPRESS) # Update_sequence mode is used to check for the update scenario. We get the expected old operations, expected # new operations and the final operation list and verify if the expected operations match the actual ones update_sequence_parser = cmd_parsers.add_parser("update_sequence", help="Test the update operation sequence") update_sequence_parser.add_argument("--old-version", dest="old_version") update_sequence_parser.add_argument("--new-version", dest="new_version") update_sequence_parser.add_argument("--old-ver-ops", nargs="*", dest="old_ops", default=argparse.SUPPRESS) update_sequence_parser.add_argument("--new-ver-ops", nargs="*", dest="new_ops", default=argparse.SUPPRESS) update_sequence_parser.add_argument("--final-ops", nargs="*", dest="ops", default=argparse.SUPPRESS) args, unknown = parser.parse_known_args() if unknown: # Print any unknown arguments passed to this script and fix them with low priority print("[Low Priority][To-Fix] Found unknown args: %s" % ', '.join(unknown)) args.start_time = datetime.strptime(args.start_time, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=UTC) exit_code = 999 actual_ops = [] for i in range(0, MAX_RETRY): if args.command == "update_sequence": exit_code, actual_ops = check_update_sequence(args) elif args.command == "normal_ops_sequence": exit_code, actual_ops = check_operation_sequence(args) else: print("No such command %s, exit code 5\n" % args.command) exit_code, actual_ops =
5, [] break if exit_code == 0: break print("{0} test failed with exit code: {1}; Retry attempt: {2}; Retrying in {3} secs".format(args.command, exit_code, i, SLEEP_TIMER)) time.sleep(SLEEP_TIMER) if exit_code != 0: print("Expected Operations: %s" % ", ".join(args.ops)) print("Actual Operations: %s" % ','.join(["[%s, Date: %s]" % (op['op'], op['date'].strftime("%Y-%m-%dT%H:%M:%SZ")) for op in actual_ops])) print("Assertion completed, exiting with code: %s" % exit_code) sys.exit(exit_code) if __name__ == "__main__": print("Asserting operations\n") main() agent_ext_workflow-validate_no_lag_between_agent_start_and_gs_processing.py000077500000000000000000000116771510742556200413230ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Asserts that goal state processing completed no more than 15 seconds after agent start # from datetime import timedelta import re import sys import time from tests_e2e.tests.lib.agent_log import AgentLog def main(): success = True needs_retry = True retry = 3 while retry >= 0 and needs_retry: success = True needs_retry = False agent_started_time = [] agent_msg = [] time_diff_max_secs = 15 last_agent_log_timestamp = None # Example: Agent WALinuxAgent-2.2.47.2 is running as the goal state agent agent_started_regex = r"Azure Linux Agent \(Goal State Agent version [0-9.]+\)" gs_completed_regex = r"ProcessExtensionsGoalState completed\s\[(?P[a-z]+_\d+)\s(?P\d+)\sms\]" verified_atleast_one_log_line = False verified_atleast_one_agent_started_log_line = False verified_atleast_one_gs_complete_log_line = False agent_log = AgentLog() try: for agent_record in agent_log.read(): last_agent_log_timestamp = agent_record.timestamp verified_atleast_one_log_line = True agent_started = re.match(agent_started_regex, agent_record.message) verified_atleast_one_agent_started_log_line = verified_atleast_one_agent_started_log_line or agent_started if agent_started: agent_started_time.append(agent_record.timestamp) agent_msg.append(agent_record.text) gs_complete = re.match(gs_completed_regex, agent_record.message) verified_atleast_one_gs_complete_log_line = verified_atleast_one_gs_complete_log_line or gs_complete if agent_started_time and gs_complete: duration = gs_complete.group('duration') diff = agent_record.timestamp - agent_started_time.pop() # Reduce the duration it took to complete the Goalstate, essentially we should only care about how long # the agent took after start/restart to start processing GS diff -= timedelta(milliseconds=int(duration)) agent_msg_line = agent_msg.pop() if diff.seconds > time_diff_max_secs: success = False print("Found delay between agent start and GoalState Processing > {0}secs: " "Messages: \n {1} {2}".format(time_diff_max_secs, agent_msg_line, 
agent_record.text)) except IOError as e: print("Unable to validate no lag time: {0}".format(str(e))) if not verified_atleast_one_log_line: success = False print("Didn't parse a single log line, ensure the log_parser is working fine and verify log regex") if not verified_atleast_one_agent_started_log_line: success = False print("Didn't parse a single agent started log line, ensure the Regex is working fine: {0}" .format(agent_started_regex)) if not verified_atleast_one_gs_complete_log_line: success = False print("Didn't parse a single GS completed log line, ensure the Regex is working fine: {0}" .format(gs_completed_regex)) if agent_started_time or agent_msg: # If agent_started_time or agent_msg is not empty, there is a mismatch in the number of agent start messages # and GoalState Processing messages # If another check hasn't already failed, and the last parsed log is less than 15 seconds after the # mismatched agent start log, we should retry after sleeping for 5s to give the agent time to finish # GoalState processing if success and last_agent_log_timestamp < (agent_started_time[-1] + timedelta(seconds=15)): needs_retry = True print("Sleeping for 5 seconds to allow goal state processing to complete...") time.sleep(5) else: success = False print("Mismatch between number of agent start messages and number of GoalState Processing messages\n " "Agent Start Messages: \n {0}".format('\n'.join(agent_msg))) retry -= 1 sys.exit(0 if success else 1) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_firewall-verify_all_firewall_rules.py000077500000000000000000000356671510742556200327320ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script checks all agent firewall rules added properly and working as expected # import argparse import contextlib import os import pwd import re import socket from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils.textutil import format_exception from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION from tests_e2e.tests.lib.firewall_manager import FirewallManager, IpTables, get_wireserver_ip from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test import http.client as httpclient def _get_effective_user() -> str: return pwd.getpwuid(os.geteuid()).pw_name @contextlib.contextmanager def switch_user(user: str) -> None: """ Switches the effective UID to the given user """ current_uid = os.getuid() try: uid = pwd.getpwnam(user).pw_uid os.seteuid(uid) log.info(f"Switched to user '{user}' (UID {uid})") yield except Exception as e: raise Exception(f"Cannot switch to user {user}: {e}") finally: try: os.seteuid(current_uid) log.info(f"Switched back to user '{_get_effective_user()}'") except Exception as e: raise Exception(f"Cannot switch back to the original user: {e}") class AgentFirewall: def __init__(self, non_root_user: str): self._firewall_manager: FirewallManager = FirewallManager.create() self._non_root_user: str = non_root_user def run(self): self._prepare_agent() self._firewall_manager.log_firewall_state("** Initial state of the firewall") # Some versions of RHEL have a baked-in agent 
(2.7.0.6) that can produce duplicate DNS rules. if DISTRO_NAME in ["rhel", "redhat"] and FlexibleVersion(DISTRO_VERSION).major >= 8: self._remove_duplicate_dns_rules() self._firewall_manager.assert_all_rules_are_set() self._test_accept_dns_rule() self._test_accept_rule() self._test_drop_rule() def _remove_duplicate_dns_rules(self) -> None: log.info("Checking for duplicate DNS rules...") if not isinstance(self._firewall_manager, IpTables): raise Exception(f"Expected a FirewallManager of type IpTables on {DISTRO_NAME} {DISTRO_VERSION}. It is {type(self._firewall_manager)}") state = self._firewall_manager.get_state() matches = [line for line in state.splitlines() if re.search(r"ACCEPT.+168\.63\.129\.16.*tcp dpt:53", line) is not None] if len(matches) < 2: log.info("No duplicates found") return duplicates = '\n'.join(matches) log.info(f"Found duplicates:\n{duplicates}") log.info("Removing 1 duplicate...") self._firewall_manager.delete_rule(FirewallManager.ACCEPT_DNS) self._firewall_manager.log_firewall_state("** State of the firewall") @staticmethod def _verify_dns_request_to_wireserver(should_succeed: bool) -> None: """ Verifies DNS requests to the wireserver """ current_user = _get_effective_user() log.info(f"-----Verifying DNS requests to wireserver from user '{current_user}'. 
Should succeed: {should_succeed}") try: socket.create_connection((get_wireserver_ip(), 53), timeout=10) succeeded = True except Exception as e: # The request should time out if the request is blocked by the firewall if isinstance(e, socket.timeout): succeeded = False else: raise Exception(f"Unexpected error while issuing a DNS request to wireserver: {format_exception(e)}") if succeeded == should_succeed: if succeeded: log.info(f"Success -- can connect to wireserver port 53 as user '{current_user}'") else: log.info(f"Success -- access to wireserver port 53 is blocked for user '{current_user}'") else: if succeeded: raise Exception(f"Error -- unprivileged user:({current_user}) could connect to wireserver port 53, make sure the firewall rules are set correctly") else: raise Exception(f"Cannot issue a DNS request as user '{current_user}'), make sure the firewall rules are set correctly [DNS request timed out]") @staticmethod def _verify_http_request_to_wireserver(should_succeed: bool) -> None: """ Verifies HTTP requests to the wireserver """ current_user = _get_effective_user() log.info(f"-----Verifying HTTP request to wireserver from user '{current_user}'. 
Should succeed: {should_succeed}") try: client = httpclient.HTTPConnection(get_wireserver_ip(), timeout=10) client.request('GET', '/?comp=versions') succeeded = True except Exception as e: if isinstance(e, socket.timeout): succeeded = False else: raise Exception(f"Unexpected error while connecting to wireserver: {format_exception(e)}") if succeeded == should_succeed: if succeeded: log.info(f"Success -- access to wireserver as user '{current_user}' is allowed") else: log.info(f"Success -- access to wireserver is blocked for user '{current_user}' ") else: if succeeded: raise Exception(f"Error -- user '{current_user}' could connect to wireserver, make sure the firewall rules are set correctly") else: raise Exception(f"Cannot connect to wireserver as user '{current_user}', make sure the firewall rules are set correctly") def _test_accept_dns_rule(self) -> None: """ Deletes the ACCEPT_DNS firewall rule and makes sure it is re-added by agent. """ log.info("-----Verifying behavior of the ACCEPT_DNS rule") log.info("Before deleting the rule, ensure a non root user can do a DNS request to wireserver, but cannot do an HTTP request") with switch_user(self._non_root_user): self._verify_dns_request_to_wireserver(should_succeed=True) self._verify_http_request_to_wireserver(should_succeed=False) # stop the agent, so that it won't re-add rules while checking log.info("Stop Guest Agent service") # agent-service is script name and stop is argument stop_agent = ["agent-service", "stop"] shellutil.run_command(stop_agent) # deleting non root accept rule log.info(f"-----Deleting firewall rule {FirewallManager.ACCEPT_DNS}...") self._firewall_manager.delete_rule(FirewallManager.ACCEPT_DNS) log.info(f"Success -- Deleted firewall rule {FirewallManager.ACCEPT_DNS}") self._firewall_manager.verify_rule_is_not_set(self._firewall_manager.ACCEPT_DNS) self._firewall_manager.log_firewall_state("** Current firewall rules") log.info("After deleting the ACCEPT_DNS rule, ensure a non-root user cannot 
do a DNS request to wireserver") with switch_user(self._non_root_user): self._verify_dns_request_to_wireserver(should_succeed=False) # restart the agent to re-add the deleted rules log.info("Restart Guest Agent service to re-add the deleted rules") # agent-service is script name and start is argument start_agent = ["agent-service", "start"] shellutil.run_command(start_agent) self._firewall_manager.assert_all_rules_are_set() self._firewall_manager.log_firewall_state("** Current IP table rules") log.info("After appending the rule back , ensure a non root user can do a DNS request to wireserver, but cannot do an HTTP request\n") with switch_user(self._non_root_user): self._verify_dns_request_to_wireserver(should_succeed=True) self._verify_http_request_to_wireserver(should_succeed=False) log.info("Ensuring missing rules are re-added by the running agent") # deleting non root accept rule log.info(f"-----Deleting firewall rule {FirewallManager.ACCEPT_DNS}...") self._firewall_manager.delete_rule(FirewallManager.ACCEPT_DNS) log.info(f"Success -- Deleted firewall rule {FirewallManager.ACCEPT_DNS}") self._firewall_manager.assert_all_rules_are_set() self._firewall_manager.log_firewall_state("** Current firewall rules") log.info("ACCEPT_DNS rule verified successfully\n") def _test_accept_rule(self): """ Deletes the ACCEPT firewall rule and makes sure it is re-added by agent. 
""" log.info("-----Verifying behavior of the ACCEPT rule") log.info("Before deleting the rule, ensure root can do an HTTP request, but a non-root user cannot") self._verify_http_request_to_wireserver(should_succeed=True) with switch_user(self._non_root_user): self._verify_http_request_to_wireserver(should_succeed=False) # stop the agent, so that it won't re-add rules while checking log.info("Stop Guest Agent service") # agent-service is script name and stop is argument stop_agent = ["agent-service", "stop"] shellutil.run_command(stop_agent) # deleting ACCEPT rule log.info(f"-----Deleting firewall rule {FirewallManager.ACCEPT}...") self._firewall_manager.delete_rule(FirewallManager.ACCEPT) log.info(f"Success -- Deleted firewall rule {FirewallManager.ACCEPT}") self._firewall_manager.verify_rule_is_not_set(FirewallManager.ACCEPT) # deleting drop rule too otherwise after restart, the daemon will go into loop since it cannot connect to wireserver. This would block the agent initialization. log.info(f"-----Deleting firewall rule {FirewallManager.DROP}...") self._firewall_manager.delete_rule(FirewallManager.DROP) log.info(f"Success -- Deleted firewall rule {FirewallManager.DROP}") self._firewall_manager.verify_rule_is_not_set(FirewallManager.DROP) self._firewall_manager.log_firewall_state("** Current firewall rules") # restart the agent to re-add the deleted rules log.info("Restart Guest Agent service to re-add the deleted rules") # agent-service is script name and start is argument start_agent = ["agent-service", "start"] shellutil.run_command(start_agent) self._firewall_manager.assert_all_rules_are_set() self._firewall_manager.log_firewall_state("** Current IP table rules") log.info("After appending the rule back, ensure root can do an HTTP request, but a non-root user cannot") with switch_user(self._non_root_user): self._verify_dns_request_to_wireserver(should_succeed=True) self._verify_http_request_to_wireserver(should_succeed=False) 
self._verify_http_request_to_wireserver(should_succeed=True) log.info("Ensuring missing rules are re-added by the running agent") log.info(f"-----Deleting firewall rule {FirewallManager.ACCEPT}...") self._firewall_manager.delete_rule(FirewallManager.ACCEPT) log.info(f"Success -- Deleted firewall rule {FirewallManager.ACCEPT}") self._firewall_manager.assert_all_rules_are_set() self._firewall_manager.log_firewall_state("** Current firewall rules") log.info("ACCEPT rule verified successfully\n") def _test_drop_rule(self): """ Deletes the DROP firewall rule and makes sure it is re-added by agent. """ log.info("-----Verifying behavior of the DROP rule") # stop the agent, so that it won't re-add rules while checking log.info("Stop Guest Agent service") # agent-service is script name and stop is argument stop_agent = ["agent-service", "stop"] shellutil.run_command(stop_agent) # deleting DROP rule log.info(f"-----Deleting firewall rule {FirewallManager.DROP}...") self._firewall_manager.delete_rule(FirewallManager.DROP) log.info(f"Success -- Deleted firewall rule {FirewallManager.DROP}") self._firewall_manager.verify_rule_is_not_set(FirewallManager.DROP) self._firewall_manager.log_firewall_state("** Current firewall rules") log.info("After deleting the non root drop rule, ensure a non-root user can do an HTTP request to wireserver") with switch_user(self._non_root_user): self._verify_http_request_to_wireserver(should_succeed=True) # restart the agent to re-add the deleted rules log.info("Restart Guest Agent service to re-add the deleted rules") # agent-service is script name and start is argument start_agent = ["agent-service", "start"] shellutil.run_command(start_agent) self._firewall_manager.assert_all_rules_are_set() self._firewall_manager.log_firewall_state("** Current IP table rules") log.info("After appending the rule back, ensure a non-root user can do a DNS request to wireserver, but cannot do an HTTP request") with switch_user(self._non_root_user):
self._verify_dns_request_to_wireserver(should_succeed=True) self._verify_http_request_to_wireserver(should_succeed=False) self._verify_http_request_to_wireserver(should_succeed=True) log.info("Ensuring missing rules are re-added by the running agent") log.info(f"-----Deleting firewall rule {FirewallManager.DROP}...") self._firewall_manager.delete_rule(FirewallManager.DROP) log.info(f"Success -- Deleted firewall rule {FirewallManager.DROP}") self._firewall_manager.assert_all_rules_are_set() self._firewall_manager.log_firewall_state("** Current firewall rules") log.info("DROP rule verified successfully\n") @staticmethod def _prepare_agent(): log.info("Executing script update-waagent-conf to enable agent firewall config flag") # Changing the firewall period from default 5 mins to 1 min, so that test won't wait for that long to verify rules shellutil.run_command(["update-waagent-conf", "OS.EnableFirewall=y", f"OS.EnableFirewallPeriod={FirewallManager.FIREWALL_PERIOD}"]) log.info("Successfully enabled agent firewall config flag") parser = argparse.ArgumentParser() parser.add_argument('-u', '--user', required=True, help="Non root user") args = parser.parse_args() run_remote_test(lambda: AgentFirewall(args.user).run()) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_persist_firewall-access_wireserver000077500000000000000000000101071510742556200323150ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Helper script which tries to access Wireserver on system reboot. Also prints out iptable rules if non-root and still # able to access Wireserver if [[ $# -ne 1 ]]; then echo "Usage: agent_persist_firewall-access_wireserver " exit 1 fi TEST_USER=$1 USER=$(whoami) echo "$(date --utc +%FT%T.%3NZ): Running as user: $USER" function check_online { echo "Checking network connectivity..." echo "Connecting to ifconfig.io to check network connection" if command -v curl >/dev/null 2>&1; then curl --retry 5 --retry-delay 5 --connect-timeout 5 -4 ifconfig.io/ip elif command -v wget >/dev/null 2>&1; then wget --tries=5 --timeout=5 --wait=5 -4 ifconfig.io/ip else http_get.py "http://ifconfig.io/ip" --timeout 5 --delay 5 --tries 5 fi if [[ $? -eq 0 ]]; then echo "Network is accessible" return 0 else echo "$(date --utc +%FT%T.%3NZ): Network still not accessible" fi echo "Running ping to 8.8.8.8 option" if ping 8.8.8.8 -c 1 -i .2 -t 30; then echo "Network is accessible" return 0 fi echo "$(date --utc +%FT%T.%3NZ): Network still not accessible" echo "Unable to connect to network, giving up" return 1 # Will remove other options if we determine first option is stable echo "Checking other options to see if network is accessible..." echo "Running ping to localhost option" if ping 127.0.0.1 -c 1 -i .2 -t 30; then echo "Ping to localhost succeeded" return 0 fi echo "Ping to localhost failed" echo "Running socket connection to wireserver:53 option" if python3 /home/"$TEST_USER"/bin/agent_persist_firewall-check_connectivity.py; then echo "Socket connection succeeded" return 0 fi echo "Socket connection failed" echo "Unable to connect to network, giving up" return 1 } if ! check_online; then # We will never be able to get online. Kill script. 
    echo "Unable to connect to network, exiting now"
    exit 1
fi

echo "Finally online, Time: $(date --utc +%FT%T.%3NZ)"
echo "Trying to contact Wireserver as $USER to see if accessible"
echo ""

# This script is run by a cron job on reboot, so it runs in a limited environment. Some distros may be missing the iptables path,
# so add the common iptables locations to the environment.
export PATH=$PATH:/usr/sbin:/sbin

echo "Firewall configuration before accessing Wireserver:"
if ! sudo iptables -t security -L -nxv -w; then
    sudo nft list table walinuxagent
fi
echo ""

# Note: the fallback echo must be grouped with the cat before piping to tr; otherwise
# the pipe binds only to the echo and the file content is never stripped of whitespace.
WIRE_IP=$( (cat /var/lib/waagent/WireServerEndpoint 2>/dev/null || echo '168.63.129.16') | tr -d '[:space:]')
if command -v curl >/dev/null 2>&1; then
    curl --retry 3 --retry-delay 5 --connect-timeout 5 "http://$WIRE_IP/?comp=versions" -o "/tmp/wire-versions-$USER.xml"
elif command -v wget >/dev/null 2>&1; then
    wget --tries=3 "http://$WIRE_IP/?comp=versions" --timeout=5 --wait=5 -O "/tmp/wire-versions-$USER.xml"
else
    http_get.py "http://$WIRE_IP/?comp=versions" --timeout 5 --delay 5 --tries 3
fi
WIRE_EC=$?
echo "ExitCode: $WIRE_EC"

if [[ "$USER" != "root" && "$WIRE_EC" == 0 ]]; then
    echo "Wireserver should not be accessible for non-root user ($USER)"
fi

if [[ "$USER" != "root" ]]; then
    echo ""
    echo "checking tcp traffic to wireserver port 53 for non-root user ($USER)"
    # Establish a TCP connection to wireserver port 53 and capture the exit code directly
    # (appending '&& echo 0 || echo 1' would clobber $? with the status of the echo)
    echo -n 2>/dev/null < /dev/tcp/"$WIRE_IP"/53
    TCP_EC=$?
echo "TCP 53 Connection ExitCode: $TCP_EC" fi Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_persist_firewall-check_connectivity.py000077500000000000000000000013601510742556200331020ustar00rootroot00000000000000import socket import sys WIRESERVER_ENDPOINT_FILE = '/var/lib/waagent/WireServerEndpoint' WIRESERVER_IP = '168.63.129.16' def get_wireserver_ip() -> str: try: with open(WIRESERVER_ENDPOINT_FILE, 'r') as f: wireserver_ip = f.read() except Exception: wireserver_ip = WIRESERVER_IP return wireserver_ip def main(): try: wireserver_ip = get_wireserver_ip() socket.setdefaulttimeout(3) socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect((wireserver_ip, 53)) print('Socket connection to wire server:53 success') except: # pylint: disable=W0702 print('Socket connection to wire server:53 failed') sys.exit(1) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_persist_firewall-test_setup000077500000000000000000000033001510742556200307730ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Script adds cron job on reboot to make sure iptables rules are added to allow access to Wireserver and also, enable the firewall config flag # set -eao pipefail if [[ $# -ne 1 ]]; then echo "Usage: agent_persist_firewall-test_setup " exit 1 fi # crontab and ping are not installed on some distros, e.g. Ubuntu Minimal if ! 
command -v crontab; then echo "crontab is not available. Installing cron." apt-get update apt-get install -y cron fi if ! command -v ping; then echo "ping is not available. Installing iputils-ping." apt-get update apt-get install -y iputils-ping fi echo "Creating cron jobs to access Wireserver on reboot" set -x # echo the commands used to set up the cron jobs ( echo "@reboot /home/$1/bin/agent_persist_firewall-access_wireserver $1 > /tmp/reboot-cron-root.log 2>&1" | crontab -u root - echo "@reboot /home/$1/bin/agent_persist_firewall-access_wireserver $1 > /tmp/reboot-cron-$1.log 2>&1" | crontab -u $1 - ) 2>&1 set +x echo "Enabling firewall in waagent.conf" update-waagent-conf OS.EnableFirewall=y agent_persist_firewall-verify_firewall_rules_on_boot.py000077500000000000000000000164351510742556200353030ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script checks firewall rules are set on boot through cron job logs.And also capture the logs for debugging purposes. 
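The access_wireserver script writes "ExitCode: <n>" lines into the cron logs, and this verification script matches them with `re.match("ExitCode:\s(\d+)", line)`. A minimal sketch of that check (the helper name `check_exit_code` mirrors the inner functions used in this file; the sample lines are illustrative):

```python
import re

# The cron job (agent_persist_firewall-access_wireserver) logs "ExitCode: <n>";
# exit code 0 means the wireserver HTTP request succeeded, which only root should see.
def check_exit_code(line: str, expect_success: bool) -> bool:
    match = re.match(r"ExitCode:\s(\d+)", line)
    if match is None:
        return False
    succeeded = int(match.group(1)) == 0
    return succeeded == expect_success

print(check_exit_code("ExitCode: 0", expect_success=True))          # True: root reached wireserver
print(check_exit_code("ExitCode: 7", expect_success=False))         # True: non-root was blocked
print(check_exit_code("some other log line", expect_success=True))  # False: no ExitCode marker
```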
# import argparse import os import re import shutil from assertpy import fail from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.firewall_manager import FirewallManager from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry def move_cron_logs_to_var_log(): # Move the cron logs to /var/log log.info("Moving cron logs to /var/log for debugging purposes") files = [ROOT_CRON_LOG, NON_ROOT_WIRE_XML, ROOT_WIRE_XML] if os.path.exists(NON_ROOT_CRON_LOG): files.append(NON_ROOT_CRON_LOG) for cron_log in files: try: shutil.move(src=cron_log, dst=os.path.join("/var", "log", "{0}.{1}".format(os.path.basename(cron_log), BOOT_NAME))) except Exception as e: log.info("Unable to move cron log to /var/log; {0}".format(e)) def check_wireserver_versions_file_exist(wire_version_file): log.info("Checking wire-versions file exist: {0}".format(wire_version_file)) if not os.path.exists(wire_version_file): log.info("File: {0} not found".format(wire_version_file)) return False if os.stat(wire_version_file).st_size > 0: return True return False def verify_data_in_cron_logs(cron_log, verify, err_msg): log.info("Verifying Cron logs") def cron_log_checks(): if not os.path.exists(cron_log): raise Exception("Cron log file not found: {0}".format(cron_log)) with open(cron_log) as f: cron_logs_lines = list(map(lambda _: _.strip(), f.readlines())) if not cron_logs_lines: raise Exception("Empty cron file, looks like cronjob didnt run") if any("Unable to connect to network, exiting now" in line for line in cron_logs_lines): raise Exception("VM was unable to connect to network on startup. Skipping test validation") if not any("ExitCode" in line for line in cron_logs_lines): raise Exception("Cron logs still incomplete, will try again in a minute") if not any(verify(line) for line in cron_logs_lines): fail("Verification failed! (UNEXPECTED): {0}".format(err_msg)) log.info("Verification succeeded. 
Cron logs as expected") retry(cron_log_checks) def verify_wireserver_ip_reachable_for_root(firewall_manager: FirewallManager): """ For root logs - Ensure the /var/log/wire-versions-root.xml is not-empty (generated by the cron job) Ensure the exit code in the /var/log/reboot-cron-root.log file is 0 """ log.info("Verifying Wireserver IP is reachable from root user") firewall_manager.log_firewall_state("** Current state of the firewall") def check_exit_code(line): match = re.match("ExitCode:\\s(\\d+)", line) return match is not None and int(match.groups()[0]) == 0 verify_data_in_cron_logs(cron_log=ROOT_CRON_LOG, verify=check_exit_code, err_msg="Exit Code should be 0 for root based cron job!") if not check_wireserver_versions_file_exist(ROOT_WIRE_XML): fail("Wire version file should not be empty for root user!") def verify_wireserver_ip_unreachable_for_non_root(firewall_manager: FirewallManager): """ For non-root - Ensure the /tmp/wire-versions-non-root.xml is empty (generated by the cron job) Ensure the exit code in the /tmp/reboot-cron-non-root.log file is non-0 """ log.info("Verifying WireServer IP is unreachable from non-root user") firewall_manager.log_firewall_state("** Current state of the firewall") def check_exit_code(line): match = re.match("ExitCode:\\s(\\d+)", line) return match is not None and int(match.groups()[0]) != 0 verify_data_in_cron_logs(cron_log=NON_ROOT_CRON_LOG, verify=check_exit_code, err_msg="Exit Code should be non-0 for non-root cron job!") if check_wireserver_versions_file_exist(NON_ROOT_WIRE_XML): fail("Wire version file should be empty for non-root user!") def verify_tcp_connection_to_wireserver_for_non_root(firewall_manager: FirewallManager): """ For non-root - Ensure the TCP 53 Connection exit code in the /tmp/reboot-cron-non-root.log file is 0 """ log.info("Verifying TCP connection to Wireserver port for non-root user") firewall_manager.log_firewall_state("** Current state of the firewall") def check_exit_code(line): match = 
re.match("TCP 53 Connection ExitCode:\\s(\\d+)", line) return match is not None and int(match.groups()[0]) == 0 verify_data_in_cron_logs(cron_log=NON_ROOT_CRON_LOG, verify=check_exit_code, err_msg="TCP 53 Connection Exit Code should be 0 for non-root cron job!") def generate_svg(): """ This is a good to have, but not must have. Not failing tests if we're unable to generate a SVG """ log.info("Running systemd-analyze plot command to get the svg for boot execution order") dest_dir = os.path.join("/var", "log", "svgs") if not os.path.exists(dest_dir): os.makedirs(dest_dir) svg_name = os.path.join(dest_dir, "{0}.svg".format(BOOT_NAME)) cmd = ["systemd-analyze plot > {0}".format(svg_name)] err_code, stdout = shellutil.run_get_output(cmd) if err_code != 0: log.info("Unable to generate svg: {0}".format(stdout)) log.info(f"SVG generated successfully: {svg_name}") def main(): try: # Verify firewall rules are set on boot through cron job logs firewall_manager = FirewallManager.create() firewall_manager.log_firewall_state("** Initial state of the firewall") firewall_manager.assert_all_rules_are_set() verify_wireserver_ip_unreachable_for_non_root(firewall_manager) verify_wireserver_ip_reachable_for_root(firewall_manager) verify_tcp_connection_to_wireserver_for_non_root(firewall_manager) finally: # save the logs to /var/log to capture by collect-logs, this might be useful for debugging move_cron_logs_to_var_log() generate_svg() parser = argparse.ArgumentParser() parser.add_argument('-u', '--user', required=True, help="Non root user") parser.add_argument('-bn', '--boot_name', required=True, help="Boot Name") args = parser.parse_args() NON_ROOT_USER = args.user BOOT_NAME = args.boot_name ROOT_CRON_LOG = "/tmp/reboot-cron-root.log" NON_ROOT_CRON_LOG = f"/tmp/reboot-cron-{NON_ROOT_USER}.log" NON_ROOT_WIRE_XML = f"/tmp/wire-versions-{NON_ROOT_USER}.xml" ROOT_WIRE_XML = "/tmp/wire-versions-root.xml" main() 
Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_persist_firewall-verify_firewalld_rules_readded.py
#!/usr/bin/env pypy3
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script deletes the firewalld rules and ensures the deleted rules are added back to the firewalld rule set after the agent starts
#
from azurelinuxagent.common.osutil import get_osutil
from azurelinuxagent.common.utils import shellutil
from tests_e2e.tests.lib.firewall_manager import Firewalld
from tests_e2e.tests.lib.logging import log


def main():
    if not Firewalld.is_service_running():
        log.info("firewalld.service is not running; skipping test")
        return

    firewall = Firewalld()
    firewall.log_firewall_state("** firewalld.service is running; initial state of the firewall")

    for rule in [Firewalld.ACCEPT_DNS, Firewalld.ACCEPT, Firewalld.DROP]:
        log.info(f"***** Verifying {rule} rule")

        agent_name = get_osutil().get_service_name()
        # stop the agent, so that it won't re-add rules while checking
        log.info("stop the agent, so that it won't re-add rules while checking")
        shellutil.run_command(["systemctl", "stop", agent_name])

        # delete the rule
        firewall.delete_rule(rule)
        # verify the deletion was successful
        firewall.verify_rule_is_not_set(rule)

        # restart the agent to re-add the deleted rules
        log.info("restart the agent to re-add the deleted rules")
        shellutil.run_command(["systemctl",
"restart", agent_name]) firewall.assert_all_rules_are_set() if __name__ == "__main__": main() agent_persist_firewall-verify_persist_firewall_service_running.py000077500000000000000000000052331510742556200373750ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script verifies firewalld rules set on the vm if firewalld service is running and if it's not running, it verifies network-setup service is enabled by the agent # from assertpy import fail from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.firewall_manager import Firewalld from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false def verify_network_setup_service_enabled(): """ Checks if network-setup service is enabled in the vm """ agent_name = get_osutil().get_service_name() service_name = "{0}-network-setup.service".format(agent_name) cmd = ["systemctl", "is-enabled", service_name] def op(command): try: return shellutil.run_command(command).rstrip() == "enabled" except shellutil.CommandError: return False try: status = retry_if_false(lambda: op(cmd), attempts=5, delay=30) except Exception as e: log.warning("Error -- while checking network.service is-enabled status {0}".format(e)) status = False if not status: cmd = ["systemctl", 
"status", service_name] fail("network-setup.service is not enabled!. Current status: {0}".format(shellutil.run_command(cmd))) log.info("network-setup.service is enabled") def verify_firewall_service_running(): log.info("Ensure test agent initialize the firewalld/network service setup") log.info("Checking if the firewalld service is active on the VM") if Firewalld.is_service_running(): # Checking if firewalld rules are present in the rule set if firewall service is active Firewalld().assert_all_rules_are_set() else: # Checking if network-setup service is enabled if firewall service is not active log.info("Checking if network-setup service is enabled by the agent since firewall service is not active") verify_network_setup_service_enabled() if __name__ == "__main__": verify_firewall_service_running() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_publish-check_update.py000077500000000000000000000105531510742556200277420ustar00rootroot00000000000000#!/usr/bin/env pypy3 import argparse # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
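verify_network_setup_service_enabled above retries the `systemctl is-enabled` check with `retry_if_false(lambda: op(cmd), attempts=5, delay=30)`. The real helper lives in tests_e2e.tests.lib.retry; this is only a stdlib sketch of that retry shape, an assumption about its internals rather than the actual implementation:

```python
import time

def retry_if_false(operation, attempts: int = 5, delay: float = 30) -> bool:
    # Keep invoking the boolean operation until it returns True or attempts run out
    # (a sketch of the helper's shape, not the library's exact code).
    success = operation()
    while not success and attempts > 1:
        attempts -= 1
        time.sleep(delay)
        success = operation()
    return success

state = {"calls": 0}

def flaky_is_enabled() -> bool:
    # Stand-in for the systemctl check: succeeds on the third attempt.
    state["calls"] += 1
    return state["calls"] >= 3

result = retry_if_false(flaky_is_enabled, attempts=5, delay=0)
print(result)  # True, after 3 calls
```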
# import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false # pylint: disable=W0105 """ Post the _LOG_PATTERN_00 changes, the last group sometimes might not have the 'Agent' part at the start of the sentence; thus making it optional. > WALinuxAgent-2.2.18 discovered WALinuxAgent-2.2.47 as an update and will exit (None, 'WALinuxAgent-2.2.18', '2.2.47') """ _UPDATE_PATTERN_00 = re.compile(r'(.*Agent\s)?(\S*)\sdiscovered\sWALinuxAgent-(\S*)\sas an update and will exit') """ > Agent WALinuxAgent-2.2.45 discovered update WALinuxAgent-2.2.47 -- exiting ('Agent', 'WALinuxAgent-2.2.45', '2.2.47') """ _UPDATE_PATTERN_01 = re.compile(r'(.*Agent)?\s(\S*) discovered update WALinuxAgent-(\S*) -- exiting') """ > Normal Agent upgrade discovered, updating to WALinuxAgent-2.9.1.0 -- exiting ('Normal Agent', WALinuxAgent, '2.9.1.0 ') """ _UPDATE_PATTERN_02 = re.compile(r'(.*Agent) upgrade discovered, updating to (WALinuxAgent)-(\S*) -- exiting') """ > Agent update found, exiting current process to downgrade to the new Agent version 1.3.0.0 (Agent, 'downgrade', '1.3.0.0') """ _UPDATE_PATTERN_03 = re.compile(r'(.*Agent) update found, exiting current process to (\S*) to the new Agent version (\S*)') """ Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7 ('2.8.9.9', 'upgrade', '2.10.0.7') """ _UPDATE_PATTERN_04 = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to (\S*) to the new Agent version (\S*)') """ > Agent WALinuxAgent-2.2.47 is running as the goal state agent ('2.2.47',) """ _RUNNING_PATTERN_00 = re.compile(r'.*Agent\sWALinuxAgent-(\S*)\sis running as the goal state agent') def verify_agent_update_from_log(published_version: str) -> bool: exit_code = 0 detected_update = False 
update_successful = False update_version = '' agentlog = AgentLog() for record in agentlog.read(): if 'TelemetryData' in record.text: continue for p in [_UPDATE_PATTERN_00, _UPDATE_PATTERN_01, _UPDATE_PATTERN_02, _UPDATE_PATTERN_03, _UPDATE_PATTERN_04]: update_match = re.match(p, record.message) if update_match: update_version = update_match.groups()[2] if update_version == published_version: detected_update = True log.info('found the agent update log: %s', record.text) break if detected_update: running_match = re.match(_RUNNING_PATTERN_00, record.message) if running_match and update_version == running_match.groups()[0]: update_successful = True log.info('found the agent started new version log: %s', record.text) if detected_update: log.info('update was detected: %s', update_version) if update_successful: log.info('update was successful') else: log.warning('update was not successful') exit_code = 1 else: log.warning('update was not detected for version: %s', published_version) exit_code = 1 return exit_code == 0 # This method will trace agent update messages in the agent log and determine if the update was successful or not. def main(): parser = argparse.ArgumentParser() parser.add_argument('-p', '--published-version', required=True) args = parser.parse_args() found: bool = retry_if_false(lambda: verify_agent_update_from_log(args.published_version)) if not found: fail('update was not found in the logs') run_remote_test(main) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_publish-get_agent_log_record_timestamp.py000077500000000000000000000053161510742556200335430ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import re from azurelinuxagent.common.future import datetime_min_utc from tests_e2e.tests.lib.agent_log import AgentLog # pylint: disable=W0105 """ > WALinuxAgent-2.2.18 discovered WALinuxAgent-2.2.47 as an update and will exit (None, 'WALinuxAgent-2.2.18', '2.2.47') """ _UPDATE_PATTERN_00 = re.compile(r'(.*Agent\s)?(\S*)\sdiscovered\sWALinuxAgent-(\S*)\sas an update and will exit') """ > Agent WALinuxAgent-2.2.45 discovered update WALinuxAgent-2.2.47 -- exiting ('Agent', 'WALinuxAgent-2.2.45', '2.2.47') """ _UPDATE_PATTERN_01 = re.compile(r'(.*Agent)?\s(\S*) discovered update WALinuxAgent-(\S*) -- exiting') """ > Normal Agent upgrade discovered, updating to WALinuxAgent-2.9.1.0 -- exiting ('Normal Agent', WALinuxAgent, '2.9.1.0 ') """ _UPDATE_PATTERN_02 = re.compile(r'(.*Agent) upgrade discovered, updating to (WALinuxAgent)-(\S*) -- exiting') """ > Agent update found, exiting current process to downgrade to the new Agent version 1.3.0.0 (Agent, 'downgrade', '1.3.0.0') """ _UPDATE_PATTERN_03 = re.compile( r'(.*Agent) update found, exiting current process to (\S*) to the new Agent version (\S*)') """ Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7 ('2.8.9.9', 'upgrade', '2.10.0.7') """ _UPDATE_PATTERN_04 = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to (\S*) to the new Agent version (\S*)') """ This script return timestamp of update message in the agent log """ def main(): try: agentlog = AgentLog() for record in agentlog.read(): for p in 
[_UPDATE_PATTERN_00, _UPDATE_PATTERN_01, _UPDATE_PATTERN_02, _UPDATE_PATTERN_03, _UPDATE_PATTERN_04]: update_match = re.match(p, record.message) if update_match: return record.timestamp return datetime_min_utc except Exception as e: raise Exception("Error thrown when searching for update pattern in agent log to get record timestamp: {0}".format(str(e))) if __name__ == "__main__": timestamp = main() print(timestamp) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_removal-verify_manifest_versions.py000077500000000000000000000077151510742556200324520ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Validates that the versions in the agent manifests are expected. 
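The docstring above each `_UPDATE_PATTERN_*` in these scripts gives a sample message and its expected groups, which makes the patterns easy to spot-check directly (sample lines below are taken from those docstrings):

```python
import re

# Patterns copied from the agent_publish scripts above.
_UPDATE_PATTERN_00 = re.compile(r'(.*Agent\s)?(\S*)\sdiscovered\sWALinuxAgent-(\S*)\sas an update and will exit')
_UPDATE_PATTERN_04 = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to (\S*) to the new Agent version (\S*)')

# Oldest format: the leading 'Agent ' is optional, so group 1 is None here.
m0 = _UPDATE_PATTERN_00.match('WALinuxAgent-2.2.18 discovered WALinuxAgent-2.2.47 as an update and will exit')
print(m0.groups())  # (None, 'WALinuxAgent-2.2.18', '2.2.47')

# Newest format: groups are (current version, 'upgrade'/'downgrade', new version).
m4 = _UPDATE_PATTERN_04.match('Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7')
print(m4.groups())  # ('2.8.9.9', 'upgrade', '2.10.0.7')
```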
# import argparse from typing import List from assertpy import assert_that from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.protocol.goal_state import GoalStateProperties, ExtensionManifest from azurelinuxagent.common.protocol.wire import WireProtocol from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry from tests_e2e.tests.lib.shell import run_command def _get_manifest_uris(wire_protocol: WireProtocol, family: str) -> List[str]: retry(lambda: wire_protocol.client.update_goal_state) goal_state = wire_protocol.client.get_goal_state() manifest_uris = next((gs_family.uris for gs_family in goal_state.extensions_goal_state.agent_families if gs_family.name == family), []) if len(manifest_uris) == 0: raise Exception("Unable to retrieve agent manifest uris from goal state. GS Agent Families: {0}".format(goal_state.extensions_goal_state.agent_families)) return manifest_uris def main(): parser = argparse.ArgumentParser() parser.add_argument('--expected_versions', required=False) parser.add_argument('--removed_version', required=False) parser.add_argument('--ga_family', required=True) args = parser.parse_args() expected_versions = args.expected_versions.split(';') if args.expected_versions is not None else None removed_version = args.removed_version ga_family = args.ga_family log.info("") log.info("Retrieving agent manifest uris for {0} GAFamily...".format(ga_family)) protocol = get_protocol_util().get_protocol(init_goal_state=False) retry(lambda: protocol.client.reset_goal_state(goal_state_properties=GoalStateProperties.ExtensionsGoalState)) manifest_uris = _get_manifest_uris(protocol, ga_family) log.info("Successfully retrieved manifest uris from goal state.") log.info("") log.info("Validating agent versions in manifest URIs from goal state...") for uri in manifest_uris: log.info("") log.info("URI: {0}".format(uri)) # xml_text = 
run_command(["curl", "-s", "{0}".format(uri)]) xml_text = run_command(["http_get.py", "{0}".format(uri)]) manifest = ExtensionManifest(xml_text) agent_versions = [pkg.version for pkg in manifest.pkg_list.versions] log.info("Agent versions in manifest: {0}".format(agent_versions)) if expected_versions is not None: log.info("Expected versions: {0}".format(expected_versions)) assert_that(expected_versions).described_as("Expected agent versions in manifest does not match actual agent versions in manifest. Expected={0}, Actual={1}".format(expected_versions, agent_versions)).is_equal_to(agent_versions) log.info("Expected versions match actual agent versions in manifest") else: log.info("Agent version which was deleted: {0}".format(removed_version)) assert_that(agent_versions).described_as("Removed version {0} is still in manifest. Manifest versions: {1}".format(removed_version, agent_versions)).does_not_contain(removed_version) log.info("Agent version which was deleted is not in manifest") log.info("") log.info("Validated all manifests successfully.") run_remote_test(main) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_status-get_last_gs_processed.py000077500000000000000000000025231510742556200315400ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Writes the last goal state processed line in the log to stdout # import re import sys from tests_e2e.tests.lib.agent_log import AgentLog def main(): gs_completed_regex = r"ProcessExtensionsGoalState completed\s\[[a-z_\d]{13,14}\s\d+\sms\]" last_gs_processed = None agent_log = AgentLog() try: for agent_record in agent_log.read(): gs_complete = re.match(gs_completed_regex, agent_record.message) if gs_complete is not None: last_gs_processed = agent_record.text except IOError as e: print("Unable to get last goal state processed: {0}".format(str(e))) print(last_gs_processed) sys.exit(0) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_update-get_latest_version_from_manifest.py000077500000000000000000000052171510742556200337510ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
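The `gs_completed_regex` in agent_status-get_last_gs_processed.py above matches the agent's goal-state-completed log line. A quick check against a plausibly shaped message (the id `incarnation_1` is an illustrative value: 13 characters drawn from `[a-z_\d]`, as the pattern requires):

```python
import re

# Pattern from agent_status-get_last_gs_processed.py: the bracketed token is the
# goal state id (13-14 chars of lowercase letters, digits, underscore), then a duration.
gs_completed_regex = r"ProcessExtensionsGoalState completed\s\[[a-z_\d]{13,14}\s\d+\sms\]"

line = "ProcessExtensionsGoalState completed [incarnation_1 1234 ms]"
print(re.match(gs_completed_regex, line) is not None)  # True

# A message without the id/duration block does not match.
print(re.match(gs_completed_regex, "ProcessExtensionsGoalState completed") is not None)  # False
```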
# # returns the agent latest version published # import argparse from azurelinuxagent.common.protocol.goal_state import GoalStateProperties from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from tests_e2e.tests.lib.retry import retry def get_agent_family_manifest(goal_state, family_type): """ Get the agent_family from last GS for given Family """ agent_families = goal_state.extensions_goal_state.agent_families agent_family_manifests = [] for m in agent_families: if m.name == family_type: if len(m.uris) > 0: agent_family_manifests.append(m) return agent_family_manifests[0] def get_largest_version(agent_manifest): """ Get the largest version from the agent manifest """ largest_version = FlexibleVersion("0.0.0.0") for pkg in agent_manifest.pkg_list.versions: pkg_version = FlexibleVersion(pkg.version) if pkg_version > largest_version: largest_version = pkg_version return largest_version def main(): try: parser = argparse.ArgumentParser() parser.add_argument('--family_type', dest="family_type", default="Test") args = parser.parse_args() protocol = get_protocol_util().get_protocol(init_goal_state=False) retry(lambda: protocol.client.reset_goal_state( goal_state_properties=GoalStateProperties.ExtensionsGoalState)) goal_state = protocol.client.get_goal_state() agent_family = get_agent_family_manifest(goal_state, args.family_type) agent_manifest = goal_state.fetch_agent_manifest(agent_family.name, agent_family.uris) largest_version = get_largest_version(agent_manifest) print(str(largest_version)) except Exception as e: raise Exception("Unable to verify agent updated to latest version since test failed to get the which is the latest version from the agent manifest: {0}".format(e)) if __name__ == "__main__": main() 
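get_largest_version above compares FlexibleVersion objects rather than raw strings. A stdlib-only sketch of why that matters (a plain tuple parse stands in for FlexibleVersion, which handles richer version formats than dotted integers):

```python
# Numeric comparison picks 2.10.0.7 as the largest; naive string comparison would
# wrongly prefer 2.9.1.0 because "1" < "9" character-wise.
def parse_version(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

versions = ["2.8.9.9", "2.10.0.7", "2.9.1.0"]
print(max(versions, key=parse_version))  # 2.10.0.7
print(max(versions))                     # 2.9.1.0 (lexicographic, wrong)
```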
Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_update-modify_agent_version000077500000000000000000000024021510742556200307140ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script updates the necessary flags to make the agent ready for RSM updates # set -euo pipefail if [[ $# -ne 1 ]]; then echo "Usage: agent_update-modify_agent_version <version>" exit 1 fi version=$1 PYTHON=$(get-agent-python) echo "Agent's Python: $PYTHON" # Some distros return a compiled .pyc file instead of the .py source file, so retrieve the parent directory first. version_file_dir=$($PYTHON -c 'import azurelinuxagent.common.version as v; import os; print(os.path.dirname(v.__file__))') version_file_full_path="$version_file_dir/version.py" sed -E -i "s/^AGENT_VERSION\s+=\s+'[0-9.]+'/AGENT_VERSION = '$version'/" $version_file_full_pathAzure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_update-self_update_check.py000077500000000000000000000044121510742556200305640ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Script verifies agent update was done by test agent # import argparse import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false #2023-12-28T04:34:23.535652Z INFO ExtHandler ExtHandler Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7 _UPDATE_PATTERN = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to upgrade to the new Agent version (\S*)') def verify_agent_update_from_log(latest_version, current_version) -> bool: """ Checks if the agent updated to the latest version from current version """ agentlog = AgentLog() for record in agentlog.read(): update_match = re.match(_UPDATE_PATTERN, record.message) if update_match: log.info('found the agent update log: %s', record.text) if update_match.groups()[0] == current_version and update_match.groups()[1] == latest_version: return True return False def main() -> None: parser = argparse.ArgumentParser() parser.add_argument('-l', '--latest-version', required=True) parser.add_argument('-c', '--current-version', required=True) args = parser.parse_args() found: bool = retry_if_false(lambda: verify_agent_update_from_log(args.latest_version, args.current_version)) if not found: fail('agent update was not found in the logs for latest version {0} from current version {1}'.format(args.latest_version, args.current_version)) if __name__ == "__main__": main() 
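The `_UPDATE_PATTERN` regex above can be exercised directly against the message portion of the sample log line quoted in the script's comment (extracting that message from a full log record is handled by `AgentLog` and is not shown here):

```python
import re

# Pattern copied from agent_update-self_update_check.py
_UPDATE_PATTERN = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to upgrade to the new Agent version (\S*)')

# Message portion of the sample record:
# 2023-12-28T04:34:23.535652Z INFO ExtHandler ExtHandler Current Agent 2.8.9.9 ...
message = ("Current Agent 2.8.9.9 completed all update checks, "
           "exiting current process to upgrade to the new Agent version 2.10.0.7")
match = _UPDATE_PATTERN.match(message)
print(match.groups())  # ('2.8.9.9', '2.10.0.7')
```

The first capture group is the currently running version and the second is the target version, which is exactly how `verify_agent_update_from_log` compares them against the `--current-version` and `--latest-version` arguments.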
Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_update-self_update_test_setup000077500000000000000000000045361510742556200312660ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script prepares the new agent and installs it on the VM # set -euo pipefail usage() ( echo "Usage: agent_update-self_update_test_setup -p|--package <package> -v|--version <version> -u|--update_to_latest_version <y|n>" exit 1 ) while [[ $# -gt 0 ]]; do case $1 in -p|--package) shift if [ "$#" -lt 1 ]; then usage fi package=$1 shift ;; -v|--version) shift if [ "$#" -lt 1 ]; then usage fi version=$1 shift ;; -u|--update_to_latest_version) shift if [ "$#" -lt 1 ]; then usage fi update_to_latest_version=$1 shift ;; *) usage esac done if [ "$#" -ne 0 ] || [ -z ${package+x} ] || [ -z ${version+x} ]; then usage fi echo "Stopping the agent service and renaming the agent log" agent-service stop mv /var/log/waagent.log /var/log/waagent.$(date --iso-8601=seconds).log # Some distros may pre-install a higher version than the custom version that the test installs, so lower the installed version first to allow the custom version to install agent_update-modify_agent_version 2.2.53 echo "Cleaning up the existing agents" rm -rfv /var/lib/waagent/WALinuxAgent-* echo "Installing $package as version $version..." 
unzip.py $package /var/lib/waagent/WALinuxAgent-$version echo "Updating the self-update related flags and restarting the service" update-waagent-conf AutoUpdate.UpdateToLatestVersion=$update_to_latest_version AutoUpdate.GAFamily=Test Debug.EnableGAVersioning=n Debug.SelfUpdateHotfixFrequency=120 Debug.SelfUpdateRegularFrequency=120 Autoupdate.Frequency=120 agent_update-verify_agent_reported_update_status.py000077500000000000000000000043301510742556200344070ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Verify if the agent reported update status to CRP via status file # import argparse import glob import json from assertpy import fail from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def check_agent_reported_update_status(expected_version: str) -> bool: agent_status_file = "/var/lib/waagent/history/*/waagent_status.json" file_paths = glob.glob(agent_status_file, recursive=True) for file in file_paths: with open(file, 'r') as f: data = json.load(f) log.info("Agent status file is %s and it's content %s", file, data) guest_agent_status = data["aggregateStatus"]["guestAgentStatus"] if "updateStatus" in guest_agent_status.keys(): if guest_agent_status["updateStatus"]["expectedVersion"] == expected_version: log.info("we found the expected version %s in agent status file", expected_version) return True log.info("we did not find the expected version %s in agent status file", expected_version) return False def main(): parser = argparse.ArgumentParser() parser.add_argument('-v', '--version', required=True) args = parser.parse_args() log.info("checking agent status file to verify if agent reported update status") found: bool = retry_if_false(lambda: check_agent_reported_update_status(args.version)) if not found: fail("Agent failed to report update status, so skipping rest of the agent update validations") run_remote_test(main) agent_update-verify_versioning_supported_feature.py000077500000000000000000000036501510742556200344470ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Verify if the agent reported supportedfeature VersioningGovernance flag to CRP via status file # import glob import json from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def check_agent_supports_versioning() -> bool: agent_status_file = "/var/lib/waagent/history/*/waagent_status.json" file_paths = glob.glob(agent_status_file, recursive=True) for file in file_paths: with open(file, 'r') as f: data = json.load(f) log.info("Agent status file is %s and it's content %s", file, data) supported_features = data["supportedFeatures"] for supported_feature in supported_features: if supported_feature["Key"] == "VersioningGovernance": return True return False def main(): log.info("checking agent status file for VersioningGovernance supported feature flag") found: bool = retry_if_false(check_agent_supports_versioning) if not found: raise Exception("Agent failed to report supported feature flag. So, skipping agent update validations " "since CRP will not send RSM requested version in GS if feature flag not found in status") run_remote_test(main) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/agent_update-wait_for_rsm_gs.py000077500000000000000000000055561510742556200303320ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Verify the latest goal state included rsm requested version and if not, retry # import argparse from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.protocol.goal_state import GoalState, GoalStateProperties from azurelinuxagent.common.protocol.wire import WireProtocol from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false, retry def get_requested_version(gs: GoalState) -> str: agent_families = gs.extensions_goal_state.agent_families agent_family_manifests = [m for m in agent_families if m.name == "Test" and len(m.uris) > 0] if len(agent_family_manifests) == 0: raise Exception( u"No manifest links found for agent family Test, skipping agent update verification") manifest = agent_family_manifests[0] if manifest.version is not None: return str(manifest.version) return "" def verify_rsm_requested_version(wire_protocol: WireProtocol, expected_version: str) -> bool: log.info("fetching the goal state to check if it includes rsm requested version") wire_protocol.client.update_goal_state() goal_state = wire_protocol.client.get_goal_state() requested_version = get_requested_version(goal_state) if requested_version == expected_version: return True else: return False def main(): parser = argparse.ArgumentParser() parser.add_argument('-v', '--version', required=True) args = parser.parse_args() protocol = get_protocol_util().get_protocol(init_goal_state=False) retry(lambda: protocol.client.reset_goal_state( 
goal_state_properties=GoalStateProperties.ExtensionsGoalState)) # whole pipeline can take some time to update the goal state with the requested version, so increasing the timeout found: bool = retry_if_false(lambda: verify_rsm_requested_version(protocol, args.version), delay=60) if not found: raise Exception("The latest goal state didn't contain requested version after we submit the rsm request for: {0}.".format(args.version)) else: log.info("Successfully verified that latest GS contains rsm requested version : %s", args.version) run_remote_test(main) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/check_data_in_agent_log.py000077500000000000000000000035061510742556200272540ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Checks that the input data is found in the agent log # import argparse import sys from datetime import datetime from azurelinuxagent.common.future import UTC from tests_e2e.tests.lib.agent_log import AgentLog def main(): parser = argparse.ArgumentParser() parser.add_argument("--data", dest='data', required=True) parser.add_argument("--after-timestamp", dest='after_timestamp', required=False) args, _ = parser.parse_known_args() print("Verifying data: {0} in waagent.log".format(args.data)) found = False try: if args.after_timestamp is not None: after_datetime = datetime.strptime(args.after_timestamp, '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=UTC) found = AgentLog().agent_log_contains(args.data, after_datetime) else: found = AgentLog().agent_log_contains(args.data) if found: print("Found data: {0} in agent log".format(args.data)) else: print("Did not find data: {0} in agent log".format(args.data)) except Exception as e: print("Error thrown when searching for test data in agent log: {0}".format(str(e))) sys.exit(0 if found else 1) if __name__ == "__main__": main()Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/ext_cgroups-check_cgroups_extensions.py000077500000000000000000000264211510742556200321400ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# import os import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.cgroup_helpers import verify_if_distro_supports_cgroup, \ verify_agent_cgroup_assigned_correctly, BASE_CGROUP, get_unit_cgroup_mount_path, \ GATESTEXT_SERVICE, AZUREMONITORAGENT_SERVICE, check_agent_quota_disabled, \ check_cgroup_disabled_due_to_systemd_error, CGROUP_TRACKED_PATTERN, AZUREMONITOREXT_FULL_NAME, GATESTEXT_FULL_NAME, \ print_cgroups, get_mounted_controller_list, using_cgroupv2 from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false def verify_custom_script_cgroup_assigned_correctly(): """ This method verifies that the CSE script created the expected folder after install, and also checks whether CSE ran under the expected cgroups """ log.info("===== Verifying custom script was assigned to the correct cgroups") # CSE creates this folder to save the output of cgroup information where the CSE script was executed. Since the CSE process exits after execution # and cgroup paths get cleaned up by the system, this information is saved at run time, while the extension is executing. 
check_temporary_folder_exists() cpu_mounted = False memory_mounted = False log.info("custom script cgroup mounts:") with open('/var/lib/waagent/tmp/custom_script_check') as fh: controllers = fh.read() log.info("%s", controllers) extension_path = "/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript" correct_cpu_mount_v1_1 = "cpu,cpuacct:{0}".format(extension_path) correct_cpu_mount_v1_2 = "cpuacct,cpu:{0}".format(extension_path) correct_memory_mount_v1 = "memory:{0}".format(extension_path) correct_cpu_memory_mount_v2 = "0::{0}".format(extension_path) cgroup_v2 = using_cgroupv2() for mounted_controller in controllers.split("\n"): if cgroup_v2: if correct_cpu_memory_mount_v2 in mounted_controller: log.info('Custom script extension mounted under correct cgroup for CPU and Memory: %s', mounted_controller) cpu_mounted = True memory_mounted = True else: if correct_cpu_mount_v1_1 in mounted_controller or correct_cpu_mount_v1_2 in mounted_controller: log.info('Custom script extension mounted under correct cgroup ' 'for CPU: %s', mounted_controller) cpu_mounted = True elif correct_memory_mount_v1 in mounted_controller: log.info('Custom script extension mounted under correct cgroup ' 'for Memory: %s', mounted_controller) memory_mounted = True if not cpu_mounted: fail('Custom script not mounted correctly for CPU! Expected {0} or {1} in cgroupv1 or {2} in cgroupv2'.format(correct_cpu_mount_v1_1, correct_cpu_mount_v1_2, correct_cpu_memory_mount_v2)) if not memory_mounted: fail('Custom script not mounted correctly for Memory! 
Expected {0} in cgroupv1 or {1} in cgroupv2'.format(correct_memory_mount_v1, correct_cpu_memory_mount_v2)) def check_temporary_folder_exists(): tmp_folder = "/var/lib/waagent/tmp" if not os.path.exists(tmp_folder): fail("Temporary folder {0} was not created, which means the CSE script did not run!".format(tmp_folder)) def verify_ext_cgroup_controllers_created_on_file_system(): """ This method ensures that the extension cgroup controllers are created on the file system after extension install """ log.info("===== Verifying ext cgroup controllers exist on file system") all_controllers_present = os.path.exists(BASE_CGROUP) missing_controllers_path = [] verified_controllers_path = [] for controller in get_mounted_controller_list(): controller_path = os.path.join(BASE_CGROUP, controller) if not os.path.exists(controller_path): all_controllers_present = False missing_controllers_path.append(controller_path) else: verified_controllers_path.append(controller_path) if not all_controllers_present: fail('Expected all of the extension controller paths: {0} to be present in the file system after extension install. 
But the missing cgroup paths are: {1}\n' 'and verified cgroup paths are: {2} \nSystem mounted cgroups are \n{3}'.format(get_mounted_controller_list(), missing_controllers_path, verified_controllers_path, print_cgroups())) log.info('Verified all extension cgroup controller paths are present and they are: \n {0}'.format(verified_controllers_path)) def verify_extension_service_cgroup_created_on_file_system(): """ This method ensures that the extension service cgroup paths are created on the file system after running the extension """ log.info("===== Verifying the extension service cgroup paths exist on file system") # GA Test Extension Service gatestext_cgroup_mount_path = get_unit_cgroup_mount_path(GATESTEXT_SERVICE) verify_extension_service_cgroup_created(GATESTEXT_SERVICE, gatestext_cgroup_mount_path) # Azure Monitor Extension Service azuremonitoragent_cgroup_mount_path = get_unit_cgroup_mount_path(AZUREMONITORAGENT_SERVICE) azuremonitoragent_service_name = AZUREMONITORAGENT_SERVICE verify_extension_service_cgroup_created(azuremonitoragent_service_name, azuremonitoragent_cgroup_mount_path) log.info('Verified all extension service cgroup paths were created in the file system.\n') def verify_extension_service_cgroup_created(service_name, cgroup_mount_path): log.info("expected extension service cgroup mount path: %s", cgroup_mount_path) all_controllers_present = True missing_cgroups_path = [] verified_cgroups_path = [] for controller in get_mounted_controller_list(): # cgroup_mount_path is similar to /azure.slice/walinuxagent.service # cgroup_mount_path[1:] = azure.slice/walinuxagent.service # expected extension_service_controller_path similar to /sys/fs/cgroup/cpu/azure.slice/walinuxagent.service extension_service_controller_path = os.path.join(BASE_CGROUP, controller, cgroup_mount_path[1:]) if not os.path.exists(extension_service_controller_path): all_controllers_present = False missing_cgroups_path.append(extension_service_controller_path) else: 
verified_cgroups_path.append(extension_service_controller_path) if not all_controllers_present: fail("Extension service: [{0}] cgroup paths couldn't be found on file system. Missing cgroup paths are: {1} \n Verified cgroup paths are: {2} \n " "System mounted cgroups are \n{3}".format(service_name, missing_cgroups_path, verified_cgroups_path, print_cgroups())) def verify_ext_cgroups_tracked(): """ Checks if ext cgroups are tracked by the agent. This is verified by checking the agent log for the message "Started tracking cgroup {extension_name}" """ log.info("===== Verifying ext cgroups tracked") cgroups_added_for_telemetry = [] gatestext_cgroups_tracked = False azuremonitoragent_cgroups_tracked = False gatestext_service_cgroups_tracked = False azuremonitoragent_service_cgroups_tracked = False cgroup_tracked_pattern_re = re.compile(CGROUP_TRACKED_PATTERN) for record in AgentLog().read(): # Cgroup tracking logged as # 2021-11-14T13:09:59.351961Z INFO ExtHandler ExtHandler Started cpu tracking cgroup Microsoft.Azure.Extensions.Edp.GATestExtGo-1.0.0.2 # [/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.Edp.GATestExtGo_1.0.0.2.slice] cgroup_tracked_match = cgroup_tracked_pattern_re.findall(record.message) if len(cgroup_tracked_match) != 0: name, path = cgroup_tracked_match[0][1], cgroup_tracked_match[0][2] if name.startswith(GATESTEXT_FULL_NAME): gatestext_cgroups_tracked = True elif name.startswith(AZUREMONITOREXT_FULL_NAME): azuremonitoragent_cgroups_tracked = True elif name.startswith(GATESTEXT_SERVICE): gatestext_service_cgroups_tracked = True elif name.startswith(AZUREMONITORAGENT_SERVICE): azuremonitoragent_service_cgroups_tracked = True cgroups_added_for_telemetry.append((name, path)) # agent, gatest extension, azuremonitor extension and extension service cgroups if len(cgroups_added_for_telemetry) < 1: fail('Expected cgroups were not tracked, according to the agent log. 
' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not gatestext_cgroups_tracked: fail('Expected gatestext cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not azuremonitoragent_cgroups_tracked: fail('Expected azuremonitoragent cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not gatestext_service_cgroups_tracked: fail('Expected gatestext service cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not azuremonitoragent_service_cgroups_tracked: fail('Expected azuremonitoragent service cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) log.info("Extension cgroups tracked as expected\n%s", cgroups_added_for_telemetry) def main(): verify_if_distro_supports_cgroup() verify_ext_cgroup_controllers_created_on_file_system() verify_custom_script_cgroup_assigned_correctly() verify_agent_cgroup_assigned_correctly() verify_extension_service_cgroup_created_on_file_system() verify_ext_cgroups_tracked() try: main() except Exception as e: # It is possible that agent cgroup can be disabled and reset the quotas if the extension failed to start using systemd-run. 
In that case, we should ignore the validation if check_cgroup_disabled_due_to_systemd_error() and retry_if_false(check_agent_quota_disabled): log.info("Cgroup is disabled due to systemd error while invoking the extension, ignoring ext cgroups validations") else: raise Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/ext_sequencing-get_ext_enable_time.py000077500000000000000000000047041510742556200315040ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Gets the timestamp for when the provided extension was enabled # import argparse import re import sys from datetime import datetime from azurelinuxagent.common.future import UTC from tests_e2e.tests.lib.agent_log import AgentLog def main(): """ Searches the agent log after the provided timestamp to determine when the agent enabled the provided extension. 
""" parser = argparse.ArgumentParser() parser.add_argument("--ext", dest='ext', required=True) parser.add_argument("--after_time", dest='after_time', required=True) args, _ = parser.parse_known_args() # Only search the agent log after the provided timestamp: args.after_time after_time = datetime.strptime(args.after_time, u'%Y-%m-%d %H:%M:%S').replace(tzinfo=UTC) # Agent logs for extension enable: 2024-02-09T09:29:08.943529Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.10] Enable extension: [bin/custom-script-shim enable] enable_log_regex = r"\[{0}-[.\d]+\] Enable extension: .*".format(args.ext) agent_log = AgentLog() try: for agent_record in agent_log.read(): if agent_record.timestamp >= after_time: # The agent_record prefix for enable logs is the extension name, for example: [Microsoft.Azure.Extensions.CustomScript-2.1.10] if agent_record.prefix is not None: ext_enabled = re.match(enable_log_regex, " ".join([agent_record.prefix, agent_record.message])) if ext_enabled is not None: print(agent_record.when) sys.exit(0) except IOError as e: print("Error when parsing agent log: {0}".format(str(e))) print("Extension {0} was not enabled after {1}".format(args.ext, args.after_time), file=sys.stderr) sys.exit(1) if __name__ == "__main__": main() ext_signature_validation-check_signature_validated.py000077500000000000000000000075121510742556200346670ustar00rootroot00000000000000Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # import argparse import glob import sys import re from datetime import datetime from azurelinuxagent.common.future import UTC, datetime_min_utc from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.agent_log import AgentLog # This script verifies that signature was validated for the specified extension. # Usage: ext_signature_validation-check_signature_validated.py --extension-name "CustomScript" def main(): parser = argparse.ArgumentParser() parser.add_argument('--extension-name', dest='extension_name', required=True) parser.add_argument("--after-timestamp", dest='after_timestamp', required=False) args, _ = parser.parse_known_args() log.info("Verifying that {0} package signature was validated.".format(args.extension_name)) sig_pattern = (r".*Successfully validated signature for package '.*{0}.*'".format(re.escape(args.extension_name))) man_pattern = (r".*Successfully validated handler manifest 'signingInfo' for extension '.*{0}.*'".format(re.escape(args.extension_name))) agent_log = AgentLog() if args.after_timestamp is None: after_datetime = datetime_min_utc else: after_datetime = datetime.strptime(args.after_timestamp, '%Y-%m-%d %H:%M:%S').replace(tzinfo=UTC) try: # Check for the signature validation and manifest validation messages sig_validated = False man_validated = False for record in agent_log.read(): if record.timestamp > after_datetime: if re.search(sig_pattern, record.message): log.info("Found message indicating that signature was successfully validated: {0}".format(record.message)) sig_validated = True if re.search(man_pattern, record.message): log.info("Found message indicating that manifest was successfully validated: {0}".format(record.message)) man_validated = True if not sig_validated: log.info("Did not find expected signature validation message in agent log. 
Expected pattern: {0}".format(sig_pattern)) sys.exit(1) if not man_validated: log.info("Did not find expected manifest validation message in agent log. Expected pattern: {0}".format(man_pattern)) sys.exit(1) # Check for the signature validation state file log.info("Checking that signature validation state file exists.") state_file_pattern = "/var/lib/waagent/*{0}*/package_validated".format(args.extension_name) matched_files = glob.glob(state_file_pattern) if matched_files is None or len(matched_files) == 0: log.info("No signature validation state file found for extension '{0}'".format(args.extension_name)) sys.exit(1) if len(matched_files) > 1: log.info("Expected exactly one signature validation state file, but found {0}.".format(len(matched_files))) log.info("Signature validation state file found for extension '{0}'".format(args.extension_name)) sys.exit(0) except Exception as e: log.info("Error thrown when checking that signature was validated: {0}".format(str(e))) sys.exit(1) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/ext_telemetry_pipeline-add_extension_events.py000077500000000000000000000242761510742556200334750ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Adds extension events for each provided extension and verifies the TelemetryEventsCollector collected or dropped them # import argparse import json import os import sys import time import uuid from assertpy import fail from datetime import datetime, timedelta from random import choice from typing import List from azurelinuxagent.common.future import UTC from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.logging import log def add_extension_events(extensions: List[str], bad_event_count=0, no_of_events_per_extension=50): def missing_key(bad_event): key = choice(list(bad_event.keys())) del bad_event[key] return "MissingKeyError: {0}".format(key) def oversize_error(bad_event): bad_event["EventLevel"] = "ThisIsAnOversizeError\n" * 300 return "OversizeEventError" def empty_message(bad_event): bad_event["Message"] = "" return "EmptyMessageError" errors = [ missing_key, oversize_error, empty_message ] sample_ext_event = { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1.0", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" } sample_messages = [ "Starting IaaS ScriptHandler Extension v1", "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.", "The quick brown fox jumps over the lazy dog", "Cursus risus at ultrices mi.", "Doing Something", "Iaculis eu non diam phasellus.", "Doing other thing", "Look ma, lemons", "Pretium quam vulputate dignissim suspendisse.", "Man this is insane", "I wish it worked as it should and not as it ain't", "Ut faucibus pulvinar elementum integer enim neque volutpat ac tincidunt." 
, "Did you get any of that?",
        "Non-English message - 此文字不是英文的",
        "κόσμε",
        "�",
        "Quizdeltagerne spiste jordbær med fløde, mens cirkusklovnen Wolther spillede på xylofon.",
        "Falsches Üben von Xylophonmusik quält jeden größeren Zwerg",
        "Zwölf Boxkämpfer jagten Eva quer über den Sylter Deich",
        "Heizölrückstoßabdämpfung",
        "Γαζέες καὶ μυρτιὲς δὲν θὰ βρῶ πιὰ στὸ χρυσαφὶ ξέφωτο",
        "Ξεσκεπάζω τὴν ψυχοφθόρα βδελυγμία",
        "El pingüino Wenceslao hizo kilómetros bajo exhaustiva lluvia y frío, añoraba a su querido cachorro.",
        "Portez ce vieux whisky au juge blond qui fume sur son île intérieure, à côté de l'alcôve ovoïde, où les bûches",
        "se consument dans l'âtre, ce qui lui permet de penser à la cænogenèse de l'être dont il est question",
        "dans la cause ambiguë entendue à Moÿ, dans un capharnaüm qui, pense-t-il, diminue çà et là la qualité de son œuvre.",
        "D'fhuascail Íosa, Úrmhac na hÓighe Beannaithe, pór Éava agus Ádhaimh",
        "Árvíztűrő tükörfúrógép",
        "Kæmi ný öxi hér ykist þjófum nú bæði víl og ádrepa",
        "Sævör grét áðan því úlpan var ónýt",
        "いろはにほへとちりぬるを わかよたれそつねならむ うゐのおくやまけふこえて あさきゆめみしゑひもせす",
        "イロハニホヘト チリヌルヲ ワカヨタレソ ツネナラム ウヰノオクヤマ ケフコエテ アサキユメミシ ヱヒモセスン",
        "? דג סקרן שט בים מאוכזב ולפתע מצא לו חברה איך הקליטה",
        "Pchnąć w tę łódź jeża lub ośm skrzyń fig",
        "В чащах юга жил бы цитрус? Да, но фальшивый экземпляр!",
        "๏ เป็นมนุษย์สุดประเสริฐเลิศคุณค่า กว่าบรรดาฝูงสัตว์เดรัจฉาน",
        "Pijamalı hasta, yağız şoföre çabucak güvendi."
    ]

    for ext in extensions:
        bad_count = bad_event_count
        event_dir = os.path.join("/var/log/azure/", ext, "events")
        if not os.path.isdir(event_dir):
            fail(f"Expected events dir: {event_dir} does not exist")
        log.info("")
        log.info("Expected dir: {0} exists".format(event_dir))
        log.info("Creating random extension events for {0}. 
No of Good Events: {1}, No of Bad Events: {2}".format( ext, no_of_events_per_extension - bad_event_count, bad_event_count)) new_opr_id = str(uuid.uuid4()) event_list = [] for _ in range(no_of_events_per_extension): event = sample_ext_event.copy() event["OperationId"] = new_opr_id event["TimeStamp"] = datetime.now(UTC).strftime(u'%Y-%m-%dT%H:%M:%S.%fZ') event["Message"] = choice(sample_messages) if bad_count != 0: # Make this event a bad event reason = choice(errors)(event) bad_count -= 1 # Missing key error might delete the TaskName key from the event if "TaskName" in event: event["TaskName"] = "{0}. This is a bad event: {1}".format(event["TaskName"], reason) else: event["EventLevel"] = "{0}. This is a bad event: {1}".format(event["EventLevel"], reason) event_list.append(event) file_name = os.path.join(event_dir, '{0}.json'.format(int(time.time() * 1000000))) log.info("Create json with extension events in event directory: {0}".format(file_name)) with open("{0}.tmp".format(file_name), 'w+') as f: json.dump(event_list, f) os.rename("{0}.tmp".format(file_name), file_name) def wait_for_extension_events_dir_empty(extensions: List[str]): # By ensuring events dir to be empty, we verify that the telemetry events collector has completed its run start_time = datetime.now(UTC) timeout = timedelta(minutes=2) ext_event_dirs = [os.path.join("/var/log/azure/", ext, "events") for ext in extensions] while (start_time + timeout) >= datetime.now(UTC): log.info("") log.info("Waiting for extension event directories to be empty...") all_dir_empty = True for event_dir in ext_event_dirs: if not os.path.exists(event_dir) or len(os.listdir(event_dir)) != 0: log.info("Dir: {0} is not yet empty".format(event_dir)) all_dir_empty = False if all_dir_empty: log.info("Extension event directories are empty: \n{0}".format(ext_event_dirs)) return time.sleep(20) fail("Extension events dir not empty before 2 minute timeout") def main(): # This test is a best effort test to ensure that the agent does 
not throw any errors while trying to transmit # events to wireserver. We're not validating if the events actually make it to wireserver. parser = argparse.ArgumentParser() parser.add_argument("--extensions", dest='extensions', type=str, required=True) parser.add_argument("--num_events_total", dest='num_events_total', type=int, required=True) parser.add_argument("--num_events_bad", dest='num_events_bad', type=int, required=False, default=0) args, _ = parser.parse_known_args() extensions = args.extensions.split(',') add_extension_events(extensions=extensions, bad_event_count=args.num_events_bad, no_of_events_per_extension=args.num_events_total) # Ensure that the event collector ran after adding the events wait_for_extension_events_dir_empty(extensions=extensions) # Sleep for a min to ensure that the TelemetryService has enough time to send events and report errors if any time.sleep(60) found_error = False agent_log = AgentLog() log.info("") log.info("Check that the TelemetryEventsCollector did not emit any errors while collecting and reporting events...") telemetry_event_collector_name = "TelemetryEventsCollector" for agent_record in agent_log.read(): if agent_record.thread == telemetry_event_collector_name and agent_record.level == "ERROR": found_error = True log.info("waagent.log contains the following errors emitted by the {0} thread: \n{1}".format(telemetry_event_collector_name, agent_record)) if found_error: fail("Found error(s) emitted by the TelemetryEventsCollector, but none were expected.") log.info("The TelemetryEventsCollector did not emit any errors while collecting and reporting events") for ext in extensions: good_count = args.num_events_total - args.num_events_bad log.info("") if not agent_log.agent_log_contains("Collected {0} events for extension: {1}".format(good_count, ext)): fail("The TelemetryEventsCollector did not collect the expected number of events: {0} for {1}".format(good_count, ext)) log.info("All {0} good events for {1} were collected by 
the TelemetryEventsCollector".format(good_count, ext)) if args.num_events_bad != 0: log.info("") if not agent_log.agent_log_contains("Dropped events for Extension: {0}".format(ext)): fail("The TelemetryEventsCollector did not drop bad events for {0} as expected".format(ext)) log.info("The TelemetryEventsCollector dropped bad events for {0} as expected".format(ext)) sys.exit(0) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/ext_update-modify_handler_manifest.py000077500000000000000000000111361510742556200315110ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # The script updates the handlerManifest.json file for a given extension. 
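The telemetry test above drops event files into the extension's `events` directory by writing to a `.tmp` file and then calling `os.rename`. A standalone sketch of that write-then-rename pattern (the `events.json` file name and `write_events_atomically` helper are illustrative, not part of the agent):

```python
import json
import os
import tempfile

def write_events_atomically(event_dir, events):
    # Write to a temp file first, then rename into place: a reader (such as
    # the agent's telemetry collector) never observes a partially written
    # JSON file, because rename within one filesystem is atomic on POSIX.
    final_path = os.path.join(event_dir, "events.json")  # hypothetical name
    tmp_path = final_path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(events, f)
    os.rename(tmp_path, final_path)
    return final_path

event_dir = tempfile.mkdtemp()
path = write_events_atomically(event_dir, [{"EventLevel": "INFO", "Message": "hello"}])
with open(path) as f:
    print(json.load(f)[0]["Message"])  # → hello
```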
# # import argparse import glob import json import os.path import shutil import sys from tests_e2e.tests.lib.logging import log def main(): parser = argparse.ArgumentParser() parser.add_argument('--extension-name', dest='extension_name', required=True, help='Name of the extension to update the handlerManifest for') operation = parser.add_argument_group('Main operation') operation_parser = operation.add_mutually_exclusive_group(required=True) operation_parser.add_argument('--properties', dest='properties', nargs='+', help='List of property=value to update in the handlerManifest file') operation_parser.add_argument("--reset", dest='reset', help='Reset the handlerManifest file to the default values') args, _ = parser.parse_known_args() extension_name = args.extension_name properties = args.properties reset = args.reset # Check for the handlerManifest file log.info("Checking that handlerManifest file exists.") manifest_file_pattern = "/var/lib/waagent/*{0}*/HandlerManifest.json".format(extension_name) matched_files = glob.glob(manifest_file_pattern) if matched_files is None or len(matched_files) == 0: log.info("No handlerManifest.json file found for extension '{0}'".format(extension_name)) sys.exit(1) manifest_file = matched_files[0] log.info("HandlerManifest file found for extension '{0}': {1}".format(extension_name, manifest_file)) if reset is not None: log.info("Resetting handlerManifest file for extension '{0}' to default values".format(extension_name)) # Reset the handlerManifest file to the default values backup_file = manifest_file + ".bak" if os.path.exists(backup_file): shutil.copy(backup_file, manifest_file) log.info("Reset handlerManifest file for extension '{0}' to default values".format(extension_name)) else: log.info("No backup file found to reset the handlerManifest file for extension '{0}'".format(extension_name)) sys.exit(0) if len(properties) == 0: log.info("No properties provided to update in the handlerManifest file for extension 
'{0}'".format(extension_name)) sys.exit(1) shutil.copy(manifest_file, manifest_file + ".bak") # Sample handlerManifest.json structure: # [ # { # "version": 1.0, # "handlerManifest": { # "installCommand": "bin/custom-script-shim install", # "uninstallCommand": "bin/custom-script-shim uninstall", # "updateCommand": "bin/custom-script-shim update", # "enableCommand": "bin/custom-script-shim enable", # "disableCommand": "bin/custom-script-shim disable", # "rebootAfterInstall": false, # "reportHeartbeat": false, # "updateMode": "UpdateWithInstall" # }, # "signingInfo": { # "type": "CustomScript", # "publisher": "Microsoft.Azure.Extensions", # "version": "2.1.13" # } # } # ] with open(manifest_file, 'r') as file: data = json.load(file) commands = data[0]['handlerManifest'] for prop in properties: # Split the property into cmd_name and cmd_value if '=' not in prop: log.info("Property '{0}' is not in the format 'cmd_name=cmd_value'".format(prop)) sys.exit(1) cmd_name, cmd_value = prop.split('=', 1) log.info("Updating command '{0}' with value '{1}'".format(cmd_name, cmd_value)) # Update the handlerManifest file log.info("Updating handlerManifest file for extension '{0}' for cmd '{1}' and value '{2}'".format(extension_name, cmd_name, cmd_value)) commands[cmd_name] = cmd_value with open(manifest_file, 'w') as file: json.dump(data, file, indent=4) log.info("Updated the handlerManifest file for extension '{0}'".format(extension_name)) sys.exit(0) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/fips-check_fips_mariner000077500000000000000000000036001510742556200266170ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Verifies whether FIPS is enabled on Mariner 2.0 # set -euo pipefail # Check if FIPS mode is enabled by the kernel (returns 1 if enabled) fips_enabled=$(sudo cat /proc/sys/crypto/fips_enabled) if [ "$fips_enabled" != "1" ]; then echo "FIPS is not enabled by the kernel: $fips_enabled" exit 1 fi # Check if sysctl is configured (returns crypto.fips_enabled = 1 if enabled) sysctl_configured=$(sudo sysctl crypto.fips_enabled) if [ "$sysctl_configured" != "crypto.fips_enabled = 1" ]; then echo "sysctl is not configured for FIPS: $sysctl_configured" exit 1 fi # Check if openssl library is running in FIPS mode # MD5 should fail; the command's output should be similar to: # Error setting digest # 131590634539840:error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS:crypto/evp/digest.c:135: openssl=$(openssl md5 < /dev/null 2>&1 || true) if [[ "$openssl" != *"disabled for FIPS"* ]]; then echo "openssl is not running in FIPS mode: $openssl" exit 1 fi # Check if dracut-fips is installed (returns dracut-fips-) dracut_fips=$( (rpm -qa | grep dracut-fips) || true ) if [[ "$dracut_fips" != *"dracut-fips"* ]]; then echo "dracut-fips is not installed: $dracut_fips" exit 1 fi echo "FIPS mode is enabled."Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/fips-enable_fips_mariner000077500000000000000000000027351510742556200270000ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance 
with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Enables FIPS on Mariner 2.0 # set -euo pipefail echo "Installing packages required packages to enable FIPS..." sudo tdnf install -y grubby dracut-fips # # Set boot_uuid variable for the boot partition if different from the root # boot_dev="$(df /boot/ | tail -1 | cut -d' ' -f1)" echo "Boot partition: $boot_dev" root_dev="$(df / | tail -1 | cut -d' ' -f1)" echo "Root partition: $root_dev" boot_uuid="" if [ "$boot_dev" != "$root_dev" ]; then boot_uuid="boot=UUID=$(blkid $boot_dev -s UUID -o value)" echo "Boot UUID: $boot_uuid" fi # # Enable FIPS and set boot= parameter # echo "Enabling FIPS..." if sudo grub2-editenv - list | grep -q kernelopts; then set -x sudo grub2-editenv - set "$(sudo grub2-editenv - list | grep kernelopts) fips=1 $boot_uuid" else set -x sudo grubby --update-kernel=ALL --args="fips=1 $boot_uuid" fiAzure-WALinuxAgent-a976115/tests_e2e/tests/scripts/get-waagent-conf-value000077500000000000000000000021541510742556200263100ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
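The `fips-check_fips_mariner` script above starts by reading `/proc/sys/crypto/fips_enabled`. A Python sketch of that kernel-flag check, with the path parameterized so the logic can be exercised off a FIPS host (`fips_enabled` is a hypothetical helper, not agent code):

```python
import os
import tempfile

def fips_enabled(flag_file="/proc/sys/crypto/fips_enabled"):
    # The kernel exposes "1" in this file when FIPS mode is active; a kernel
    # built without FIPS support exposes no flag file at all, which we treat
    # as "not enabled".
    try:
        with open(flag_file) as f:
            return f.read().strip() == "1"
    except OSError:
        return False

# Exercise the check against a temporary stand-in for the kernel flag file.
tmp = tempfile.NamedTemporaryFile(mode="w", delete=False)
tmp.write("1\n")
tmp.close()
print(fips_enabled(tmp.name))  # → True
os.unlink(tmp.name)
```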
# See the License for the specific language governing permissions and # limitations under the License. # # # Echos the value in waagent.conf for the specified setting if it exists. # set -euo pipefail if [[ $# -lt 1 ]]; then echo "Usage: get-waagent-conf-value " exit 1 fi PYTHON=$(get-agent-python) waagent_conf=$($PYTHON -c 'from azurelinuxagent.common.osutil import get_osutil; print(get_osutil().agent_conf_file_path)') cat $waagent_conf | while read line do if [[ $line == $1* ]]; then IFS='=' read -a values <<< "$line" echo ${values[1]} exit 0 fi done Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/get_distro.py000077500000000000000000000016571510742556200246510ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Prints the distro and version of the machine # import sys from azurelinuxagent.common.version import get_distro def main(): # Prints '_' distro = get_distro() print(distro[0] + "_" + distro[1].replace('.', '')) sys.exit(0) if __name__ == "__main__": main() Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/get_goal_state.py000077500000000000000000000313431510742556200254620ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Utility to fetch the goal state. # import argparse import os.path import re import subprocess import sys import tempfile import time from http import client from typing import Dict, List from urllib.parse import urlparse from xml.dom import minidom verbose: bool = False def _get(url: str, headers: Dict[str, str], tries: int, timeout: int, delay: int) -> str: """ Issues an HTTP GET request using the given 'url'; returns the response. """ if verbose: print(f"\n{url}\n\n") p = urlparse(url) relative_uri = p.path if p.fragment: relative_uri = f"{relative_uri}#{p.fragment}" if p.query: relative_uri = f"{relative_uri}?{p.query}" for i in range(tries): try: connection = client.HTTPConnection(p.hostname, p.port, timeout=timeout) try: connection.request("GET", url=relative_uri, headers=headers) response = connection.getresponse() content = response.read().decode("utf-8") if response.status != 200: raise Exception(f"{response.reason} - {content}") return content finally: connection.close() except Exception as exception: print(f"GET {url} failed: {exception}", file=sys.stderr) if i < tries - 1: print(f"Retrying in {delay} seconds...", file=sys.stderr) time.sleep(delay) raise Exception(f"GET {url} failed after {tries} tries") def _get_goal_state(endpoint: str, tries: int, timeout: int, delay: int) -> str: """ Issues an HTTP GET request to retrieve the goal state. 
""" return _get(url=f"http://{endpoint}:80/machine/?comp=goalstate", headers=_get_wireserver_request_headers(request_encryption=False, key_location=None), tries=tries, timeout=timeout, delay=delay) def _get_wireserver_request_headers(request_encryption: bool, key_location) -> Dict[str, str]: """ Returns the headers needed for requests to the WireServer endpoint. If 'request_encryption' is True, adds the Transport certificate to the headers. """ headers = { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": "2012-11-30" } if request_encryption: key = "" with open(os.path.join(key_location, "TransportCert.pem"), mode="rt") as f: for line in f: if not line.startswith("----"): key += line.rstrip() headers.update({ "x-ms-cipher-name": "AES128_CBC", "x-ms-guest-agent-public-x509-cert": key }) return headers def _get_vm_settings(endpoint: str, goal_state_xml: str, tries: int, timeout: int, delay: int) -> str: """ Issues an HTTP GET request to retrieve the VmSettings. """ headers = { "x-ms-version": "2015-09-01", "x-ms-containerid": _get_elements_by_tag_name(goal_state_xml, "ContainerId")[0], "x-ms-host-config-name": _get_elements_by_tag_name(goal_state_xml, "ConfigName")[0], "x-ms-client-correlationid": "12345678-9012-3456-7890-123456789012" } return _get(url=f"http://{endpoint}:32526/vmSettings", headers=headers, tries=tries, timeout=timeout, delay=delay) def _get_elements_by_tag_name(xml: str, tag: str) -> List[str]: """ Retrieves a list of all the elements matching the given tag in the given XML. """ root = minidom.parseString(xml) elements = root.getElementsByTagName(tag) if len(elements) == 0: raise Exception(f"Can't find {tag}") return list(map(lambda e: e.childNodes[0].data if len(e.childNodes) == 1 and e.childNodes[0].nodeType == minidom.Node.TEXT_NODE else e.toxml(), elements)) def _run_command(command: str, command_input: str) -> str: """ Executes a command and returns the stdout. 
""" return subprocess.Popen( command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE ).communicate(input=command_input.encode())[0].decode() def _extract_certificates(data: str, key_location: str, output_location: str) -> None: """ Extracts the certificates from the WireServer response and writes them to *.crt and *.prv files in the given 'output_location'. The 'key_location' specifies the path to the Transport certificate. """ # # Decrypt the data returned by the WireServer (which is a PFX package) and convert it into a PEM package. # pem_data = _run_command( f'openssl cms -decrypt -inkey {os.path.join(key_location, "TransportPrivate.pem")} -recip {os.path.join(key_location, "TransportCert.pem")} | openssl pkcs12 -nodes -password pass: -nomacver', command_input=f'MIME-Version:1.0\nContent-Disposition: attachment\nContent-Type: application/x-pkcs7-mime\nContent-Transfer-Encoding: base64\n\n{data}' ) # # Split the PEM data into individual keys and certificates # pem_data_lines = pem_data.splitlines() keys, certificates = [], [] start, end = 0, 0 while end < len(pem_data_lines): if re.match(r'[-]+END.*KEY[-]+', pem_data_lines[end]): keys.append((start, end)) start = end + 1 elif re.match(r'[-]+END.*CERTIFICATE[-]+', pem_data_lines[end]): certificates.append((start, end)) start = end + 1 end += 1 # # Write each certificates to a *.crt file using the corresponding thumbprint as name; keep a map of thumbprints indexed by public key in # order to associate each private key with its corresponding thumbprint. 
# thumbprints_by_public_key = {} # map of thumbprints indexed by the corresponding public key for c in certificates: certificate_data = "\n".join(pem_data_lines[c[0] : c[1] + 1]) + "\n" thumbprint = _run_command('openssl x509 -fingerprint -noout', command_input=certificate_data) thumbprint = thumbprint.rstrip().split('=')[1].replace(':', '').upper() # the fingerprint looks like 'SHA1 Fingerprint=DF:94:08:08:B0:BB:78:23:49:2E:28:E2:E2:33:86:0C:DD:31:75:88' public_key = _run_command('openssl x509 -pubkey -noout', command_input=certificate_data) thumbprints_by_public_key[public_key] = thumbprint certificate_path = os.path.join(output_location, f'{thumbprint}.crt') with open(certificate_path, "wt") as f: f.write(certificate_data) print(certificate_path) # # Write each private key to a *.prv file using the corresponding thumbprint as name. # for k in keys: key_data = "\n".join(pem_data_lines[k[0] : k[1] + 1]) public_key = _run_command('openssl rsa -pubout', command_input=key_data) thumbprint = thumbprints_by_public_key.get(public_key) if thumbprint is None: print("WARNING: Skipping private key with no associated certificate", file=sys.stderr) continue key_path = os.path.join(output_location, f'{thumbprint}.prv') with open(key_path, "wt") as f: f.write(key_data) print(key_path) def main(): parser = argparse.ArgumentParser( formatter_class=argparse.RawDescriptionHelpFormatter, description='Display the current goal state.', epilog=""" Displays the goal state object. Use the --certificates, --extensions, '--hosting-environment, --remote-access, and --shared option to display the corresponding sub-object. Use the --tag option to display only the XML elements matching that tag. Use the --vmsettings to display the VmSettings. 
Note that this is a JSON document, not an XML document Examples: * get_goal_state.py * get_goal_state.py --tag Incarnation * get_goal_state.py --extensions * get_goal_state.py --certificates --expand /tmp * get_goal_state.py --vmsettings """) parser.add_argument('--delay', required=False, default=6, type=int, help="Delay in seconds between retries of WireServer requests.") parser.add_argument('--endpoint', required=False, default="168.63.129.16", help="IP address for the WireServer endpoint.") parser.add_argument('--expand', nargs="?", const="", required=False, help="When used with --certificates, expands the WireServer response into *.crt and *.prv PEM files. If a value is given, files are created under that path, otherwise a temporary directory is used.") parser.add_argument('--tag', required=False, default=None, help="Outputs only the XML elements that match the tag.") parser.add_argument('--tries', required=False, default=3, type=int, help="Number of times to attempt WireServer requests.") parser.add_argument('--timeout', required=False, default=10, type=int, help="Timeout in seconds for WireServer requests.") parser.add_argument('-v', '--verbose', action='store_true', help='Display verbose output.') parser.add_argument('--waagent', required=False, default="/var/lib/waagent", help="Location of the Transport certificate for WireServer requests.") group = parser.add_mutually_exclusive_group(required=False) group.add_argument('--certificates', action='store_true', help='Fetch the Certificates for the goal state') group.add_argument('--extensions', action='store_true', help='Fetch the ExtensionsConfig for the goal state') group.add_argument('--hosting-environment', action='store_true', help='Fetch the HostingEnvironmentConfig for the goal state') group.add_argument('--remote-access', action='store_true', help='Fetch the RemoteAccessInfo for the goal state') group.add_argument('--shared', action='store_true', help='Fetch the SharedConfig for the goal state') 
group.add_argument('--vmsettings', action='store_true', help='Fetch the VmSettings') args = parser.parse_args() if args.vmsettings: if args.tag is not None: raise Exception("--vmsettings and --tag are mutually exclusive") if args.expand is not None: if not args.certificates: raise Exception("The --expand option can only be used with --certificates") if args.tag is not None: raise Exception("--expand and --tag are mutually exclusive") if args.verbose: global verbose # pylint: disable=global-statement verbose = True goal_state = _get_goal_state(endpoint=args.endpoint, tries=args.tries, timeout=args.timeout, delay=args.delay) url, headers = None, None if args.certificates: url = _get_elements_by_tag_name(goal_state, 'Certificates')[0] headers = _get_wireserver_request_headers(request_encryption=True, key_location=args.waagent) elif args.extensions: url = _get_elements_by_tag_name(goal_state, 'ExtensionsConfig')[0] elif args.hosting_environment: url = _get_elements_by_tag_name(goal_state, 'HostingEnvironmentConfig')[0] elif args.remote_access: url = _get_elements_by_tag_name(goal_state, 'RemoteAccessInfo')[0] headers = _get_wireserver_request_headers(request_encryption=True, key_location=args.waagent) elif args.shared: url = _get_elements_by_tag_name(goal_state, 'SharedConfig')[0] elif args.vmsettings: vm_settings = _get_vm_settings(endpoint=args.endpoint, goal_state_xml=goal_state, tries=args.tries, timeout=args.timeout, delay=args.delay) print(vm_settings) return if url is None: xml_document = goal_state else: if headers is None: headers = _get_wireserver_request_headers(request_encryption=False, key_location=None) xml_document = _get(url, headers=headers, tries=args.tries, timeout=args.timeout, delay=args.delay) if args.certificates and args.expand is not None: output_location = args.expand if args.expand != '' else tempfile.mkdtemp() _extract_certificates(_get_elements_by_tag_name(xml_document, 'Data')[0], key_location=args.waagent, output_location=output_location) 
return if args.tag is None: print(xml_document) else: elements = _get_elements_by_tag_name(xml_document, args.tag) if len(elements) == 1: print(elements[0]) else: print(elements) if __name__ == "__main__": try: main() except Exception as exception: print(exception, file=sys.stderr) sys.exit(1) sys.exit(0) Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/http_get.py000077500000000000000000000056531510742556200243240ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Performs an HTTP GET request for the given URL, with retries, and prints
# (or saves) the response body.
#
from __future__ import print_function

import argparse
import time
import sys

if sys.version_info[0] < 3:
    import httplib as http_client
    from urlparse import urlparse
else:
    from http import client as http_client
    from urllib.parse import urlparse


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-t', '--timeout', dest="timeout", required=False, default=5)
    parser.add_argument('-r', '--tries', dest="tries", required=False, default=3)
    parser.add_argument('-d', '--delay', dest="delay", required=False, default=5)
    parser.add_argument('-O', '--output', dest="output", required=False, default=None)
    parser.add_argument('url')
    args = parser.parse_args()

    url = args.url
    timeout = int(args.timeout)
    tries = int(args.tries)
    delay = int(args.delay)

    p = urlparse(url)
    relative_uri = p.path
    if p.fragment:
        relative_uri = "{0}#{1}".format(relative_uri, p.fragment)
    if p.query:
        relative_uri = "{0}?{1}".format(relative_uri, p.query)

    for i in range(tries):
        try:
            if "https" in p.scheme:
                connection = http_client.HTTPSConnection(p.hostname, p.port, timeout=timeout)
            else:
                connection = http_client.HTTPConnection(p.hostname, p.port, timeout=timeout)
            try:
                connection.request("GET", url=relative_uri)
                response = connection.getresponse()
                if response.status != 200:
                    raise Exception("{0} - {1}".format(response.reason, response.read()))
                if args.output:
                    with open(args.output, 'wb') as output_file:
                        output_file.write(response.read())
                else:
                    content = response.read().decode("utf-8")
                    print(content)
                break
            finally:
                connection.close()
        except Exception as exception:
            print("GET failed: {0}".format(exception), file=sys.stderr)
            if i < tries - 1:
                time.sleep(delay)
            else:
                raise


if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        print(e, file=sys.stderr)
        sys.exit(1)
    sys.exit(0)
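`http_get.py` above and `get_goal_state.py` share the same retry shape: attempt the request, sleep between failures, and re-raise after the last try. A standalone sketch of that loop, with a hypothetical flaky callable standing in for the network request:

```python
import time

def with_retries(fn, tries=3, delay=0):
    # Same structure as the retry loops in http_get.py / get_goal_state.py:
    # retry on any exception, sleep between attempts, re-raise on the last.
    for i in range(tries):
        try:
            return fn()
        except Exception:
            if i < tries - 1:
                time.sleep(delay)
            else:
                raise

attempts = {"n": 0}

def flaky():
    # Fails twice, then succeeds: exercises the retry path.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise Exception("transient failure")
    return "ok"

print(with_retries(flaky, tries=3, delay=0))  # → ok
```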
Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/initial_agent_update-agent_update_check_from_log.py

#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Checks in the agent log that the initial agent update happens with self-update before processing the goal state
#
import argparse
import datetime
import re

from assertpy import fail

from tests_e2e.tests.lib.agent_log import AgentLog
from tests_e2e.tests.lib.logging import log


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--current_version", dest='current_version', required=True)
    parser.add_argument("--latest_version", dest='latest_version', required=True)
    args = parser.parse_args()

    agentlog = AgentLog()

    patterns = {
        "goal_state": "ProcessExtensionsGoalState started",
        "self_update": f"Self-update is ready to upgrade the new agent: {args.latest_version} now before processing the goal state",
        "exit_process": f"Current Agent {args.current_version} completed all update checks, exiting current process to upgrade to the new Agent version {args.latest_version}"
    }

    first_occurrence_times = {"goal_state": datetime.time.min, "self_update": datetime.time.min, "exit_process": datetime.time.min}

    for record in agentlog.read():
        for key, pattern in patterns.items():
            # Skip if we already found the first occurrence of the pattern
            if first_occurrence_times[key] != datetime.time.min:
                continue
            if re.search(pattern, record.message, flags=re.DOTALL):
                log.info(f"Found data: {record} in agent log")
                first_occurrence_times[key] = record.when
                break

    if first_occurrence_times["self_update"] < first_occurrence_times["goal_state"] and first_occurrence_times["exit_process"] < first_occurrence_times["goal_state"]:
        log.info("Verified initial agent update happened before processing goal state")
    else:
        fail(f"Agent initial update didn't happen before processing goal state and first_occurrence_times for patterns: {patterns} are: {first_occurrence_times}")


if __name__ == '__main__':
    main()

Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/recover_network_interface-get_nm_controlled.py

#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Prints whether the machine's primary network interface is controlled by NetworkManager
#
import sys

from azurelinuxagent.common.osutil import get_osutil


def main():
    os_util = get_osutil()
    ifname = os_util.get_if_name()
    nm_controlled = os_util.get_nm_controlled(ifname)
    if nm_controlled:
        print("Interface is NM controlled")
    else:
        print("Interface is NOT NM controlled")
    sys.exit(0)


if __name__ == "__main__":
    main()

Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/samples-error_remote_test.py

#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A sample remote test that simulates an unexpected error
#
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import run_remote_test


def main():
    log.info("Setting up test")
    log.info("Doing some operation")
    log.warning("Something went wrong, but the test can continue")
    log.info("Doing some other operation")
    raise Exception("Something went wrong")  # simulate an unexpected error


run_remote_test(main)

Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/samples-fail_remote_test.py

#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A sample remote test that fails
#
from assertpy import fail

from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import run_remote_test


def main():
    log.info("Setting up test")
    log.info("Doing some operation")
    log.warning("Something went wrong, but the test can continue")
    log.info("Doing some other operation")
    fail("Verification of the operation failed")


run_remote_test(main)

Azure-WALinuxAgent-a976115/tests_e2e/tests/scripts/samples-pass_remote_test.py

#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A sample remote test that passes
#
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import run_remote_test


def main():
    log.info("Setting up test")
    log.info("Doing some operation")
    log.warning("Something went wrong, but the test can continue")
    log.info("Doing some other operation")
    log.info("All verifications succeeded")


run_remote_test(main)
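Taken together, the three samples above exercise the pass, fail, and error paths of run_remote_test. As a rough standalone sketch (not the real helper, which lives in tests_e2e/tests/lib/remote_test.py, and whose actual contract maps outcomes to process exit codes rather than strings), the classification those samples rely on could be modeled as:

```python
def classify_outcome(test_main):
    # Hypothetical simplification: invoke the test's main() and classify
    # the result. A failed verification (assertpy's fail() raises
    # AssertionError) is distinguished from an unexpected error.
    try:
        test_main()
        return "passed"
    except AssertionError:
        return "failed"
    except Exception:
        return "error"

print(classify_outcome(lambda: None))  # passed
```

This is why the fail sample calls assertpy's `fail()` while the error sample raises a plain `Exception`: the harness can report a verification failure differently from an unexpected crash.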